
a new way of storytelling
based on the integration of biosignal waves,
natural and digital neural networks

Revelations of Dreams

Revelations of Dreams is the first audio-visual book
with biosignals: translations of brain waves and heart beat.


The collection is part of the MindDrawPlay project,

which emerged from the integration of Art&Science
as a result of years of research and inspiration.

Every work in the collection contains two storylines.

The first is an innovative animation based on a specifically
chosen set of stable diffusion images.
The second is a visualization of brain waves and heart beat,
and a translation of both biosignals into sound.

Attention, Meditation, Theta, Alpha, Beta and Gamma levels are estimated in real time from the EEG signal during recording and linked to sound samples. For the Attention-related sound, the pitch changes according to the Attention level; for the other sounds, the volume changes. The heart beat is detected from the ECG signal and also has a related sound.
Every audio translation reflects the state of the brain and heart and their responses to the images in the flow.

 

The overall sound level affects the saturation of colors.
 

Therefore, you see the visual flow and hear the music of the brain and heart recorded during that flow. Each item in the collection is a unique story with a biodigital signature of the moment in the mindspace. It reveals and demonstrates the ongoing global integration of our natural waves and their expansion with so-called "artificial intelligence".

The horizontal aspect ratio is for demonstration on desktop screens;
the collection itself is made in a vertical ratio and is based on vertical source images.

ROD#002

How does it work?

The main idea of the collection is the audio-visual translation of biosignals combined with stable diffusion art.

Let's start with short definitions of all the terms and then explain in more detail how it is all integrated.

1) Audio-visual translation of biosignals means video in which the sound and visuals have a direct connection with a person's biological electrical signals.


2) Biosignals are brain waves and heart beat,
which are recorded from a person with special devices:

- Brain waves in the form of an electroencephalogram (EEG)


recorded with a mobile neurointerface (for example, BrainBit)

video example with the BrainBit neurointerface and audio-visual translations of brain waves


- Heart beat in the form of an electrocardiogram (ECG)


recorded with a mobile cardio device (for example, Callibri)


3) Stable Diffusion (SD) art - images generated
by neural networks from text prompts.


video where I briefly explain how the visual animation is performed with stable diffusion images

Technical implementation

General Scheme


1) Visual flow generation based on a set of SD images

The animation is performed inside the MindDrawPlay app (C++, Qt) and is based on a sequence of transitions between a set of source images.
OpenCV methods are used to manifest the next image in a collage-like way: starting from a random point of the image, a growing area spreads like a drop. When the area fills the full size of the image, the image switches (to a random one from the set) and a new point for the next manifestation is determined.
I call this animation model Dreamflow.
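To make the drop-like manifestation concrete, here is a minimal sketch of one such transition using OpenCV. It is only an illustration of the idea, not the actual MindDrawPlay code; the image loading, growth rate and window handling are assumptions.

// Sketch of a Dreamflow-style "drop" transition (illustration, not MindDrawPlay code).
// A circular mask grows from a random seed point; inside the mask the next image
// shows through, outside the current image stays. When the mask covers the frame,
// the next image becomes the current one and a new seed point is picked.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <random>
#include <vector>

int main() {
    std::vector<cv::Mat> images = { /* load the set of source images here, all the same size */ };
    cv::Mat current = images[0].clone();
    std::mt19937 rng{std::random_device{}()};

    const int w = current.cols, h = current.rows;
    std::uniform_int_distribution<int> randX(0, w - 1), randY(0, h - 1),
                                       randImg(0, (int)images.size() - 1);

    cv::Point seed(randX(rng), randY(rng));
    cv::Mat next = images[randImg(rng)];
    double radius = 0.0;
    const double growthRate = 4.0;                     // pixels per frame, illustrative value

    for (;;) {                                         // one loop iteration = one output frame
        radius += growthRate;
        cv::Mat mask = cv::Mat::zeros(h, w, CV_8UC1);
        cv::circle(mask, seed, (int)radius, cv::Scalar(255), cv::FILLED);

        cv::Mat frame = current.clone();
        next.copyTo(frame, mask);                      // collage-like manifestation of the next image

        cv::imshow("Dreamflow sketch", frame);
        if (cv::waitKey(30) == 27) break;              // Esc to stop

        if (radius > std::hypot(w, h)) {               // the drop filled the whole frame:
            current = next.clone();                    // switch to the manifested image,
            next = images[randImg(rng)];               // pick a new random image from the set
            seed = cv::Point(randX(rng), randY(rng));  // and a new point for the next manifestation
            radius = 0.0;
        }
    }
    return 0;
}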


The flow generated in this way is then transmitted via Spout to TouchDesigner, where an additional trace effect (based on a pixel sorting technique) is applied and the flow is overlaid with the biosignal visualization.


input images are source spaces for Dreamflow

Why Dreamflow is amazing and why I love it so much...

There are two reasons:
 

1) A random order of images can produce meaningful stories thanks to the specific choice of input images. The images are generated by me, and many of them share similar prompts; therefore, their overlay can produce stories. Moreover, even loosely related images, when presented with such smooth transitions, give a feeling of some connection between them.
What you can experience with Dreamflow is how our brain produces relations between random appearances.

2) Dreamflow is ALWAYS DIFFERENT and has a REALLY HUGE NUMBER
of possible stories. For a single transition between two images, the total number of options equals the number of ordered pairs (2-permutations) of images in the set.
 

For example, I currently have 220 images in my set. Let's see how many different single transitions are possible: 220 × 219 = 48180.


A 1-minute story has ~10 transitions, i.e. a sequence of 11 images, therefore the total number of different stories is 220 × 219 × … × 210 = 215 817 634 060 114 469 644 800 :)

a 24-digit number, roughly 2 × 10^23, about a fifth of a septillion (short scale) or of a quadrillion (long scale)

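These numbers are easy to reproduce: with k transitions you get an ordered sequence of k + 1 distinct images, so the count is the number of (k + 1)-permutations of the set. A small sketch (not part of the project code):

// Counting Dreamflow variety: ordered sequences of m distinct images out of n
// give P(n, m) = n * (n - 1) * ... * (n - m + 1) possible stories.
#include <cstdio>

long double permutations(int n, int m) {
    long double p = 1.0L;
    for (int i = 0; i < m; ++i) p *= (n - i);
    return p;
}

int main() {
    std::printf("single transitions (2 of 220): %.0Lf\n", permutations(220, 2));      // 48180
    std::printf("10-transition stories (11 of 220): %.3Le\n", permutations(220, 11)); // ~2.158e+23
    return 0;
}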

Of course, not all transitions and stories look nice. I publish the best records, and it often takes many trials to get the most interesting ones.
The numbers above show the level of potential variety. Each Dreamflow is the manifestation of a single story out of more than 10^23 possible ones.

Dreamflow also has an option for manual choice: a predefined set and order of images. I have recorded a few flows with it, but I see randomness as one of the key features of this visual representation model.

Dreamflow has key parameters that determine how fast new images manifest and how fast the drop area grows. They can be connected to the Attention level, which gives your brain direct control over the flow. However, for short visual stories recorded as videos, stable manifestation looks better. Currently, I consider this option more suitable for long sessions or live presentations as a neurofeedback example.

2) Estimation of Attention, Meditation and brain wave levels, and their audio translation.

The BrainWaves Utility (C++, Qt) is used to receive the signal from the neurointerface device (via Bluetooth) and process it: apply filters, remove artifacts, and perform signal decomposition (FFT) to extract the values of specific frequency ranges - Theta, Alpha, Beta, Gamma.
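As an illustration of the last step, here is a minimal sketch of extracting band values from the magnitude spectrum of one EEG window. The filtering and the FFT itself are omitted, and the band boundaries are typical textbook values, not necessarily the exact ones used in the utility.

// Sketch: band power extraction from an FFT magnitude spectrum of one EEG window.
// Assumes spectrum[i] holds the magnitude at frequency i * (sampleRate / windowSize).
#include <cstddef>
#include <vector>

struct BandPowers { double theta, alpha, beta, gamma; };

static double bandPower(const std::vector<double>& spectrum, double binHz,
                        double fromHz, double toHz) {
    double sum = 0.0;
    for (std::size_t i = 0; i < spectrum.size(); ++i) {
        const double f = i * binHz;
        if (f >= fromHz && f < toHz) sum += spectrum[i] * spectrum[i];
    }
    return sum;
}

BandPowers extractBands(const std::vector<double>& spectrum,
                        double sampleRate, std::size_t windowSize) {
    const double binHz = sampleRate / windowSize;   // frequency resolution of the FFT
    return {
        bandPower(spectrum, binHz,  4.0,  8.0),     // Theta
        bandPower(spectrum, binHz,  8.0, 13.0),     // Alpha
        bandPower(spectrum, binHz, 13.0, 30.0),     // Beta
        bandPower(spectrum, binHz, 30.0, 45.0)      // Gamma (upper bound is an assumption)
    };
}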


What are Attention and Meditation?

These metrics reflect the mental state of a person - being more or less focused, excited or calm. There is no single established way to estimate Attention and Meditation levels; the exact methods vary across fields of application.

Basically, the Beta wave level is clearly associated with Attention and the Alpha wave level with Meditation; however, other waves also have a significant impact on mental states. Here, the estimation of both metrics is based on the relative expression of the Theta, Alpha and Beta waves.
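The exact formulas stay inside the utility; one plausible reading of "relative expression", shown purely as an illustration, is a normalized ratio of band powers:

// Illustration only: Attention and Meditation as relative expression of band powers.
// The actual formulas in the BrainWaves Utility may differ.
struct MentalState { double attention, meditation; };

MentalState estimateState(double theta, double alpha, double beta) {
    const double total = theta + alpha + beta;
    if (total <= 0.0) return {0.0, 0.0};
    return {
        beta  / total,   // more relative Beta  -> higher Attention
        alpha / total    // more relative Alpha -> higher Meditation
    };
}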

The filtered EEG signal and the six levels obtained inside the utility are then transmitted to TouchDesigner via the OSC protocol, where they are visualized and overlaid with the main animation.
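As a sketch of that transmission step, sending the levels over OSC could look like this with the liblo library; the address paths and port are purely illustrative (not the ones used in the project), and on the TouchDesigner side an OSC In CHOP or DAT would listen on the matching port.

// Sketch: sending estimated levels to TouchDesigner over OSC (liblo).
// Paths and port number are illustrative, not the project's actual ones.
#include <lo/lo.h>

int main() {
    lo_address td = lo_address_new("127.0.0.1", "7000");   // TouchDesigner host and OSC port

    float attention = 0.62f, meditation = 0.31f;
    float theta = 0.20f, alpha = 0.25f, beta = 0.40f, gamma = 0.05f;

    lo_send(td, "/eeg/attention",  "f", attention);
    lo_send(td, "/eeg/meditation", "f", meditation);
    lo_send(td, "/eeg/bands", "ffff", theta, alpha, beta, gamma);

    lo_address_free(td);
    return 0;
}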

video where I briefly explain how the audio-visual translation works with simple generative models


image: diygenius.com/the-5-types-of-brain-waves

How does audio translation work?

There are six sound samples inside the utility playing in a loop, one for each related metric.

For the Attention-related sound, the pitch changes according to the Attention level: when you are more focused, the tone becomes higher, and when you are more relaxed, the tone is lower.

For the other sounds, the volume changes in direct relation to the levels. Additionally, there are inner parameters that amplify the volume of Gamma (which usually has a very low level) and decrease the volume of Alpha and Beta.

For the heart beat, a heart sound plays on each beat detected from the ECG signal.
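A simplified model of this mapping is sketched below; the pitch range and the attenuation and amplification factors are illustrative, the actual values live inside the utility.

// Illustration of the audio mapping; actual ranges and factors in the utility may differ.
// Attention drives the pitch of its sample, the other levels drive volume
// (Gamma amplified, Alpha and Beta attenuated), heart beats trigger a one-shot sound.
#include <algorithm>

struct SamplePlayback { double pitch = 1.0; double volume = 1.0; };

SamplePlayback attentionSound(double attention /* 0..1 */) {
    // more focus -> higher tone; 0.5x..1.5x of the base pitch as an example range
    return { 0.5 + attention, 1.0 };
}

SamplePlayback bandSound(double level /* 0..1 */, double gain) {
    return { 1.0, std::clamp(level * gain, 0.0, 1.0) };
}

void updateLoopingSamples(double attention, double meditation, double theta,
                          double alpha, double beta, double gamma) {
    SamplePlayback att = attentionSound(attention);
    SamplePlayback med = bandSound(meditation, 1.0);
    SamplePlayback th  = bandSound(theta, 1.0);
    SamplePlayback al  = bandSound(alpha, 0.6);   // Alpha attenuated
    SamplePlayback be  = bandSound(beta,  0.6);   // Beta attenuated
    SamplePlayback ga  = bandSound(gamma, 3.0);   // Gamma amplified (its raw level is low)
    // apply the pitch/volume values to the six looping samples here
    (void)att; (void)med; (void)th; (void)al; (void)be; (void)ga;
}

void onHeartBeatDetected() {
    // play the one-shot heart sound on each peak detected from the ECG signal
}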

 

Every audio translation reflects the state of the brain and heart and their responses to the images in Dreamflow during recording.

3) Heart beat detection and Pressure Index.

The HeartBeat Utility (C++, Qt) is used to receive the signal from the cardio device (via Bluetooth) and process it: detect heart beat peaks and estimate the heart rate and the Pressure Index. Heart beat detection is based on a modification of the Pan–Tompkins algorithm. The Pressure Index is a specific metric (Baevsky's stress index) based on the analysis of heart rate variability. Currently, it does not have an audio translation in my model, only a visual representation of the value. However, I consider it valuable to have, because it gives a more detailed description of heart functioning than the heart rate value alone.
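For reference, Baevsky's stress index is commonly computed over a window of RR intervals as SI = AMo / (2 · Mo · MxDMn), where Mo is the mode of the RR intervals, AMo is the share of intervals falling into the modal 50 ms bin, and MxDMn is the variation range. A sketch of that computation (binning and units follow the common convention and may differ slightly from the utility):

// Sketch: Baevsky's stress index from a window of RR intervals given in seconds.
// SI = AMo / (2 * Mo * MxDMn): Mo = mode of RR intervals (s), AMo = share of
// intervals in the modal 50 ms bin (%), MxDMn = max - min RR interval (s).
#include <algorithm>
#include <cmath>
#include <map>
#include <vector>

double baevskyStressIndex(const std::vector<double>& rrSeconds) {
    if (rrSeconds.size() < 2) return 0.0;

    const double binWidth = 0.05;                                   // 50 ms histogram bins
    std::map<long, int> histogram;
    for (double rr : rrSeconds)
        ++histogram[std::lround(rr / binWidth)];

    const auto modal = std::max_element(histogram.begin(), histogram.end(),
        [](const auto& a, const auto& b) { return a.second < b.second; });

    const double mo    = modal->first * binWidth;                   // mode, s
    const double amo   = 100.0 * modal->second / rrSeconds.size();  // amplitude of mode, %
    const auto [mn, mx] = std::minmax_element(rrSeconds.begin(), rrSeconds.end());
    const double mxdmn = *mx - *mn;                                 // variation range, s

    if (mo <= 0.0 || mxdmn <= 0.0) return 0.0;
    return amo / (2.0 * mo * mxdmn);
}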


image: Explode / Shutterstock.com


The filtered ECG signal, the heart rate value and the Pressure Index obtained inside the utility are then transmitted to TouchDesigner via the OSC protocol, where they are visualized and overlaid with the main animation.

4) TouchDesigner (sound to saturation of colors, integration of all parts)

Color saturation through sound volume level

The overall sound level modulates the saturation of colors in the flow, which is controlled via the HSV Adjust operator in TouchDesigner. Additionally, a MIDI controller is used during recording for more precise tuning of this effect with two parameters: the sensitivity of the sound's effect on saturation, and the audio-reactivity delay.
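Outside TouchDesigner, the same mapping can be modelled with two numbers, a sensitivity and a smoothing factor (the audio-reactivity delay); the values below are illustrative, in the project they are tuned live from the MIDI controller.

// Illustration of the sound-to-saturation mapping with sensitivity and smoothing.
// In the project this runs in TouchDesigner (HSV Adjust); values here are examples.
#include <algorithm>

struct SaturationMapper {
    double sensitivity = 1.5;   // how strongly the sound level pushes saturation
    double smoothing   = 0.9;   // 0..1, higher = slower response (audio-reactivity delay)
    double smoothed    = 0.0;

    // soundLevel in 0..1; returns a saturation factor for the HSV adjustment
    double update(double soundLevel) {
        smoothed = smoothing * smoothed + (1.0 - smoothing) * soundLevel;
        return std::clamp(smoothed * sensitivity, 0.0, 1.0);
    }
};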

Integration of all parts

All obtained biosignals, their metrics and visual animation flow
are combined in one TouchDesigner project.


An Akai MIDImix is used for precise tuning of color saturation from sound


Attention line

Meditation line


main network


internal network for receiving and visualizing biosignals

5) Biosignals analysis

(post-processing of recorded data)

There is an option to analyze brain wave and heart beat activity with simple statistics, to see how they changed during the recording.

 

This module is intended more for sessions longer than one minute and for comparisons between sessions. It allows you to see the overall dynamics of the biosignal values as well as the dynamics within specific time intervals, for example at the beginning and at the end of a session. Brain wave and heart beat activity can be viewed in parallel.

video demonstration of the Statistics module showing the analysis of brain and heart activity dynamics

Further development


Mobile neurointerfaces and cardio devices are becoming more available, and I see it as my mission to show people how these devices can be used not only for medical and training purposes, but also for artistic creative expression and expanded communication. In particular, how they can be combined with modern art techniques, such as neural networks and generative models.


I started developing the MindDrawPlay project in 2017 and have experimented a lot with possible ways of connecting visual animations to biosignals. The current implementation of Dreamflow looks quite stable and balanced.

Recently, I decided to do an everyday challenge of recording Revelations of Dreams. You can follow it on my Telegram, Instagram or X.

As further development, I would like to study more options for direct or more comprehensive translation of biosignals, for example using the Attention level to control the speed of the flow, which I have already mentioned in the Dreamflow section.

One direction is the modification of the audio translations with more direct interpretations of brain waves and sound effects.

 

Another is to analyze specific patterns of brain and heart responses to particular images, extract image features and connect them with biosignal features. This may make the flow more interactive in terms of content, driving it based on the person's reactions and internal state.

For example, there could be subsets of images for different Attention levels or heart rates. These sets can be determined individually for each person with a calibration procedure in which different types of images are presented and the biofeedback to them is recorded. Another option is to extract some property of the images (structural, color or semantic) whose value is related to the Attention level.
This would make it possible to choose images based on the person's internal state and responses.

One more option is the development of a horizontal aspect ratio, with parameters adjusted for that format.

 

At some point, I would like to integrate Dreamflow into an immersive VR space and use additional sensors (like eye tracking) for more interactive, direct control of the transitions. A simpler option for similar control would be to use the gyroscope from a new version of the neurointerface.

Additionally, specific brain waves or transitions between images can be visualized with different generative particle models, which I have already started working on.
This adds more 3D effect and dynamics, but may look overcomplicated.

Dreamflow modification using ParticlesGPU for transitions: the Attention level modulates the expression of particle traces, and the Meditation level modulates the spread of the particles.

Additional generative animation in two ways:
on the brain wave visualization and on the particles.

Revelations of Dreams is the moment when I clearly realized that what I am doing and what
I would love to do is not just audio-visual Art&Science, but research and storytelling.
It has become a significant part of my life's book. Being an audio-visual writer with biosignal translation
is the kind of novel and transformative experience that you want to share with the world.

I am open to any questions and potential collaborations / exhibitions / performances, and will be glad to talk about my favourite project or any related topic.

  • Telegram
  • Facebook
  • Instagram
  • X