Framework for Physical Perception of Photography

Abstract

The Resonance Camera is a physical–computational imaging system that translates visual information and invisible environmental phenomena into a tactile spatial surface. Rather than functioning as a passive image-recording device, the camera operates as an active, embodied interface: a system that physically indexes light, time, and unseen forces through motion.

The project is being developed in clearly defined phases. Phase 1 (completed) establishes a functional imaging pipeline using an Arduino-compatible camera module with its native lens, paired with a distributed processing architecture. Image data captured by the Arduino system is transmitted to a Raspberry Pi 4, where custom Python scripts perform image processing, spatial transformation, and computational reinterpretation (including masking, object removal, and remapping).

Phase 2 (in progress) integrates environmental sensing—such as magnetometers and related field-sensing devices—whose data is synchronized with image frames and fused computationally. The combined data stream is then rendered physically through an updated solenoid-driven pin array, producing a tactile surface that simultaneously represents photographic imagery and normally imperceptible forces.

By merging embedded imaging, computation, sensing, and actuation, the Resonance Camera reframes image-making as a tactile, temporal, and spatial experience—blurring the boundaries between camera, sculpture, and perceptual interface.


1. Motivation & Contribution

Traditional cameras collapse space and time into flat visual representations, prioritizing optical fidelity while distancing viewers from embodied engagement with light and environment. This project challenges that paradigm by reintroducing physicality into image formation—allowing images to be felt, not only seen.

Rather than relying on interchangeable lenses or photographic optimization, the system intentionally uses a fixed, embedded camera module. This constraint foregrounds computation, sensing, and actuation over optical refinement, aligning the camera more closely with measurement, translation, and interaction than with conventional representation.

Key Contributions

  • A phased physical–computational camera system, progressing from embedded image capture and Python-based transformation to multi-sensor environmental visualization

  • A distributed hardware architecture separating real-time acquisition from computational processing

  • A tactile imaging surface that physically renders both photographic content and non-visual environmental forces

  • Integration of Arduino-based acquisition, Raspberry Pi processing, Python computation, and solenoid actuation

  • An open, modular framework suitable for experimental imaging, artistic research, and tangible media systems


2. System Overview

Conceptual Pipeline

Light + Environmental Forces
        ↓
Arduino Camera Module (native lens)
        ↓
Arduino (camera control & sensor acquisition)
        ↓
Raspberry Pi 4 (Python processing & sensor fusion)
        ↓
Mapping Algorithms
        ↓
Solenoid Drivers
        ↓
Dynamic Tactile Surface



Functional Description

  • Image Capture (Phase 1 – Complete):
    An Arduino-compatible camera module captures images through its native fixed lens. The Arduino handles camera triggering and low-level data acquisition.

  • Processing & Transformation:
    Image data is transmitted to a Raspberry Pi 4, where Python-based scripts perform computational image processing, including masking, object removal, spatial remapping, and data normalization.

  • Sensor Integration (Phase 2 – In Progress):
    Environmental sensors (e.g., magnetometers) connected to the Arduino collect real-time field data. Sensor readings are synchronized with image frames and transmitted to the Raspberry Pi for fusion.

  • Physical Output:
    The Raspberry Pi computes a combined image–sensor representation and generates actuation commands. These commands are sent back to the Arduino, which handles time-critical solenoid control, producing a tactile relief surface.

The result is a continuously evolving, touchable image that reflects both visible imagery and invisible environmental dynamics.
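The mapping step described above can be sketched concretely. The snippet below is an illustrative assumption, not the project's actual code: it downsamples a grayscale frame to a 4×4 pin grid (matching the ~16 solenoids in the BOM), blends in one normalized magnetometer reading, and thresholds to binary pin states. The grid size, blend weight, threshold, and field range are all hypothetical parameters.

```python
import numpy as np

GRID = 4            # assumed 4x4 pin array (~16 solenoids, per the BOM)
BLEND = 0.3         # assumed weight given to the magnetometer channel
THRESHOLD = 0.5     # pins fire when the fused value crosses this level

def fuse_to_pins(frame, mag_uT, mag_range=(-100.0, 100.0)):
    """Map a grayscale frame plus one magnetometer reading to pin states.

    frame   -- 2D uint8 array (H, W), one captured image
    mag_uT  -- scalar field strength in microtesla
    Returns a (GRID, GRID) boolean array: True = solenoid energized.
    """
    h, w = frame.shape
    # Downsample: average each cell of a GRID x GRID partition.
    cells = frame[: h - h % GRID, : w - w % GRID]
    cells = cells.reshape(GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
    img = cells / 255.0                        # normalize image to [0, 1]

    lo, hi = mag_range
    mag = (mag_uT - lo) / (hi - lo)            # normalize field to [0, 1]
    mag = min(max(mag, 0.0), 1.0)

    fused = (1 - BLEND) * img + BLEND * mag    # simple weighted fusion
    return fused > THRESHOLD                   # quantize to binary pin states
```

A continuous relief could be produced instead by driving pin height from `fused` directly rather than thresholding; the binary form matches simple on/off solenoids.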



3. Technical Architecture



┌──────────────────────────────────────────────┐
│ IMAGING & SENSING FRONT END                  │
│  - Arduino-compatible camera (native lens)   │
│  - Magnetometer & environmental sensors      │
├──────────────────────────────────────────────┤
│ ACQUISITION & CONTROL LAYER                  │
│  - Arduino microcontroller                   │
│  - Camera triggering                         │
│  - Sensor polling                            │
│  - Solenoid timing control                   │
├──────────────────────────────────────────────┤
│ COMPUTATIONAL LAYER                          │
│  - Raspberry Pi 4                            │
│  - Python image processing                   │
│  - Sensor fusion & spatial mapping           │
│  - Actuation command generation              │
├──────────────────────────────────────────────┤
│ OUTPUT PIN ARRAY                             │
│  - Solenoid-driven pin grid                  │
│  - Precision guide plate & carrier           │
│  - Modular, scalable design                  │
└──────────────────────────────────────────────┘




Key Components


  • Arduino-compatible camera module (native lens) — embedded image capture with fixed optics, optimized for computational translation rather than optical variability

  • Raspberry Pi 4 (host computer) — primary processing unit running Python for image handling, data fusion, and actuator mapping

  • Magnetometer & environmental sensors — capture of non-visual forces (e.g., magnetic fields) synchronized with image data

  • Solenoid-based pin array — discrete, fast vertical actuation translating combined image and sensor data into tactile form

  • Custom Python scripts — image processing, masking, spatial remapping, sensor fusion, and solenoid control logic

  • 3D-printed frame, guide plate, and pin carriers — mechanical alignment, modularity, and scalability of the pin-array system
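One way the Raspberry Pi's actuation commands could travel back to the Arduino is a small framed serial protocol. The sketch below assumes a hypothetical 3-byte frame (a header byte followed by a little-endian 16-bit pin mask) and the pyserial library for transport; the document does not specify the actual wire format.

```python
HEADER = 0xA5  # assumed frame-start byte, not the project's documented protocol

def pack_pin_states(states):
    """Pack 16 booleans (row-major pin order) into a 3-byte frame:
    header byte, then the low and high bytes of a 16-bit mask."""
    mask = 0
    for i, on in enumerate(states):
        if on:
            mask |= 1 << i
    return bytes([HEADER, mask & 0xFF, (mask >> 8) & 0xFF])

def send_pin_states(port, states):
    """Write one actuation frame to an already-open serial port
    (e.g. a pyserial serial.Serial instance)."""
    port.write(pack_pin_states(states))

# Usage (assumed device path and baud rate, requires pyserial):
#   import serial
#   with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
#       send_pin_states(port, [True] * 8 + [False] * 8)
```

Keeping the frame fixed-length lets the Arduino parse it in a few microseconds inside its control loop, preserving the time-critical solenoid timing described above.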


4. Bill of Materials (BOM)

Component / System                        Quantity  Est. Cost (USD)
----------------------------------------  --------  ---------------
Arduino Mega (or compatible)              1         $45
Raspberry Pi 4 (4GB)                      1         $55
Arduino-compatible camera (native lens)   1         $30
Solenoid actuators (pin array)            ~16       $100
Magnetometer + environmental sensors      1 set     $25
MOSFETs / solenoid driver boards          —         $30
5–12V regulated power supply              1         $20
3D-printed frame & mechanical parts       —         $20
Wiring, connectors, fasteners             —         $25

Total Estimated Cost: ≈ $350
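As a quick sanity check, the line items above sum exactly to the stated estimate:

```python
# BOM line items from the table above (USD)
bom = {
    "Arduino Mega (or compatible)": 45,
    "Raspberry Pi 4 (4GB)": 55,
    "Arduino-compatible camera (native lens)": 30,
    "Solenoid actuators (pin array)": 100,
    "Magnetometer + environmental sensors": 25,
    "MOSFETs / solenoid driver boards": 30,
    "5-12V regulated power supply": 20,
    "3D-printed frame & mechanical parts": 20,
    "Wiring, connectors, fasteners": 25,
}
total = sum(bom.values())
print(total)  # 350
```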



5. Research Context & Motivation

Theoretical Framework

  • Phenomenology of Perception (Merleau-Ponty): perception as embodied engagement rather than detached observation

  • Embodied Cognition (Varela et al.): meaning arises through sensorimotor interaction

  • Tangible Media (Ishii & Ullmer): computation expressed through physical form

Research Gap:
While imaging technologies increasingly pursue resolution and realism, few address how images might engage the body directly or reveal non-visual forces. This project treats imaging as a translation problem—mapping light and environmental data into physical experience.


6. Future Work

  • Higher-density pin arrays and smoother solenoid actuation

  • Expanded sensor fusion (electromagnetic fields, motion, proximity)

  • Time-based recording and replay of tactile images

  • Reduced latency through tighter acquisition–actuation coupling

  • Exhibition-scale and participatory installations
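The time-based recording and replay item could be prototyped on the Pi side by logging timestamped pin frames and replaying them at the original cadence. A minimal sketch, assuming an `actuate` callback as the interface to whatever drives the solenoids:

```python
import time

def record_frame(log, states):
    """Append a timestamped pin-state frame to an in-memory log."""
    log.append({"t": time.monotonic(), "pins": list(states)})

def replay(log, actuate, speed=1.0):
    """Replay logged frames at their original cadence, scaled by `speed`.

    actuate -- callback taking one pin-state list (a hypothetical
               interface; the project's actual driver is not shown here)
    """
    if not log:
        return
    t0 = log[0]["t"]
    start = time.monotonic()
    for frame in log:
        # Sleep until this frame's original time offset (scaled by speed).
        delay = (frame["t"] - t0) / speed - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        actuate(frame["pins"])
```

Persisting the log (e.g. as JSON lines) would extend the same idea to exhibition-scale playback of previously captured tactile images.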



7. References

  • Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind. MIT Press.

  • Merleau-Ponty, M. (1962). Phenomenology of Perception. Routledge.

  • Ishii, H., & Ullmer, B. (1997). Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. Proceedings of CHI '97. ACM.

  • Clark, A. (1997). Being There. MIT Press.


8. Citation


Project page: https://www.coreydziadzio.com/altered-perception---4d-an-experimental-camera
GitHub repository: https://github.com/CJD-11/Resonance-Camera








