We are all connected. 

This idea is part of the collective consciousness of modern society, and in the age of the pandemic it has revived in our everyday discourse. We rely on the connectivity of the online space, and we miss the interconnected real world. This project marks the connectedness of people in physical space, visually and aurally, to observe and understand the entanglements between humans that can be traced through an ordinary day at an ordinary place.

The project starts by surveying IP camera services. We believe that the live videos captured by cameras set up across the globe represent a contemporary world view: we are simultaneously watching and being watched, one manifestation of connectedness in a disparate world.

Live video images 實時視頻畫面

03:42:58 could be a random moment, but it could also be the time stamp of a particular incident. We recorded hours of streamed video from around the world, and identified special moments within mundane daily routines, before or during the pandemic, indoors or out on the street. Using our custom software we analyse the clips to make the social intertwining of people visible and audible.

Technically speaking, the project adopts the #machinelearning model PoseNet to estimate human poses in a video in real time, and then constructs a dynamic web that wraps around all the detected people as a way of visualising their connectedness. The number of people in the frame and the distances between them are taken as the variables that decide the sound to be generated.
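As a rough sketch of how such a mapping could work (the function names, thresholds, and sound parameters below are illustrative assumptions, not the project's actual code): given the per-person pose positions that a detector such as PoseNet reports for each frame, pairwise distances can define both the edges of the dynamic web and the parameters of the generated sound.

```python
import itertools
import math

def connection_graph(people, max_dist=300.0):
    """people: list of (x, y) centroids of detected poses, in pixels.
    Returns edges (i, j, distance) for every pair closer than max_dist,
    i.e. the lines of the 'web' drawn between people in the frame."""
    edges = []
    for (i, a), (j, b) in itertools.combinations(enumerate(people), 2):
        d = math.dist(a, b)
        if d <= max_dist:
            edges.append((i, j, d))
    return edges

def sound_params(people, edges, frame_width=1280):
    """Hypothetical mapping from the frame's social geometry to sound:
    more people -> more voices; closer pairs -> value nearer 1.0."""
    if edges:
        mean_d = sum(d for _, _, d in edges) / len(edges)
    else:
        mean_d = float(frame_width)  # nobody connected: treat as far apart
    proximity = 1.0 - min(mean_d / frame_width, 1.0)
    return {"voices": len(people), "proximity": proximity}

# Example frame: two people close together, one far away.
people = [(0.0, 0.0), (100.0, 0.0), (1000.0, 0.0)]
edges = connection_graph(people)
params = sound_params(people, edges)
```

Here only the pair within 300 px is joined by an edge, and the `proximity` value rises as detected people move closer, so a synthesiser driven by these parameters would thicken and brighten as the crowd tightens. The actual choice of distance threshold and sound mapping is the project's own.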

Pose-estimation 姿勢識別

In the future development of this project, we will further study the social behavioural patterns exhibited unconsciously in daily contact (one explicit example is the 2-metre social-distancing rule imposed in the current situation), and refine our mapping from movement to visualisation and generated sound. We will also challenge the roles of observer and observed in more interactive settings.



該項目從調研 IP 攝像機服務開始。我們相信,那些由設置在全球各地的攝像機拍攝的實時視頻代表了一種當代的世界觀:我們同時在觀看和被觀看,這是分散世界中人們相互聯繫的一種表現。

03:42:58 可能是一個隨機時刻,但它也可能是特定事件的時間戳。我們記錄了世界各地數小時的流媒體視頻材料,並在大流行之前或期間、室內或室外街道上的平凡日常生活中識別出那些特殊的時刻。使用我們的定製軟體,我們分析這些素材,使人們的社交糾纏變得可見、可聽。

從技術上講,該項目採用機器學習模型 PoseNet 來實時估計視頻中的人體姿勢,然後構建一個動態的網,包裹所有檢測到的人,作為可視化相互聯繫的一種方式。畫面中的人數及其之間的距離被視為決定生成聲音的變量。在這個項目的未來發展中,我們將進一步瞭解日常接觸中無意識地表現出的社會行為模式(一個明確的例子是當前情況下實施的2米社交隔離規則),並改進我們建立的運動與視覺/生成聲音之間的映射。我們還將在更具互動性的環境中挑戰觀察者/被觀察者的角色。

03:42:58 – Preliminary WIP Report
