Project in collaboration with Jessica Scott-Dutcher.
This week we documented our learnings from last week’s play-testing, researched air fresheners, worked out basic system diagrams for both the device interactions and the user experience, hacked into the most suitable air freshener (twice), and explored pitch detection in Pure Data.
One of the key components of this project is pitch detection. Upon detecting the assigned pitch/frequency, our device sprays a scent for the user to smell.
Scenario: The user sings a note and holds it continuously for a certain length of time, say ‘x’ seconds. After ‘x’ seconds the user stops to take a breath. At this point he/she smells a scent.
1. How does the user recognize the degree of correctness? How can we differentiate the feedback for 50% correct and 70% correct?
2. Should there be a scent denoting when the user went wrong?
We decided to program our device to keep detecting the pitch at very small but regular intervals of time. At each interval (while the user is singing), depending on whether the user’s pitch falls within the ideal range or not, it sprays the positive or negative smell. This way, whenever the user stops singing and breathes in, he/she gets to review the performance through the intensity of the combination of the two smells.
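The interval-by-interval decision described above can be sketched in a few lines. This is a hypothetical sketch, not our actual patch: it assumes some pitch tracker (e.g. sigmund~ in Pure Data) supplies one frequency estimate in Hz per interval, and the ±50 cent tolerance is an assumption; the spray hardware is not shown.

```python
import math

def cents_off(freq_hz, target_hz):
    """Distance from the target pitch in cents (100 cents = 1 semitone)."""
    return 1200 * math.log2(freq_hz / target_hz)

def classify_interval(freq_hz, target_hz, tolerance_cents=50):
    """'positive' if the sung pitch is within tolerance of the target, else 'negative'."""
    if abs(cents_off(freq_hz, target_hz)) <= tolerance_cents:
        return "positive"
    return "negative"

# Example run: target A4 = 440 Hz, one estimate per detection interval.
for estimate in [438.0, 452.0, 440.5, 475.0]:
    print(classify_interval(estimate, 440.0))
```

Working in cents rather than raw Hz keeps the tolerance musically meaningful across low and high notes.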
To do this, we are using Pure Data or Max/MSP. I tried a basic pitch detection patch in Pure Data using the built-in microphone on my computer. The current version of Pd-extended doesn’t work too well with the latest Mac models, so I will have to redo this in Max.
1. Design a pitch detection system for the vocal scale A#
2. Determine ideal frequency ranges for each of the 7 notes
3. Learn how to map it in Max/MSP and connect it to an Arduino
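As a starting point for step 2, the frequency ranges could be derived from the tonic. This sketch uses equal temperament and a major-scale (shuddha swara) pattern from a tonic of A#4 ≈ 466.16 Hz; real Indian Classical practice often uses just intonation, so both the offsets and the ±50 cent acceptance band are our assumptions to be tuned in testing.

```python
import math

TONIC_HZ = 466.16                    # A#4 in twelve-tone equal temperament
SWARAS = ["Sa", "Re", "Ga", "Ma", "Pa", "Dha", "Ni"]
SEMITONES = [0, 2, 4, 5, 7, 9, 11]   # major-scale offsets from the tonic

def note_range(semitones_from_tonic, tolerance_cents=50):
    """Center frequency and (low, high) acceptance bounds for one swara."""
    center = TONIC_HZ * 2 ** (semitones_from_tonic / 12)
    low = center * 2 ** (-tolerance_cents / 1200)
    high = center * 2 ** (tolerance_cents / 1200)
    return center, low, high

for name, st in zip(SWARAS, SEMITONES):
    center, low, high = note_range(st)
    print(f"{name}: {center:7.2f} Hz  (accept {low:.2f}-{high:.2f} Hz)")
```

A ±50 cent band makes adjacent notes' ranges just touch without overlapping, which keeps the positive/negative decision unambiguous.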
I often find myself getting lost in this exponentially expanding universe of unorganized information. At times there is so much to absorb that one begins to stray into a hypnotizing nothingness. Through my final project, I want to address a certain numbness that comes with this information overload – episodes in time where one just spaces out thinking about everything, yet nothing. A lot like how one is hypnotized by cloud watching.
I envision a space where people lie down on the floor and look up at projections that emulate the movements and characteristics of clouds in the sky. The shapes and size of these ‘cloud-like-particles’ will be determined by the information drawn in by ‘an API’.
The users lie down on the floor and put their head inside a geodesic dome that creates a personal space for them. Inside this dome they will be able to see projections of these ‘cloud-like-particles’. This visual experience will be accompanied by sounds of wind and the occasional birds chirping.
I imagine the ‘cloud-like-particles’ to be 2D and 3D geometric forms floating across the viewer’s cone of vision. Presented here is a mood board of what it would look like.
Points of Discussion:
1. What would be a relevant Dataset for this project? One that is constantly changing (preferably increasing, as the information around us does). Ex. Wikipedia? How do I track the increase in data within an API?
2. Is there an API for how much time people spend actively browsing information they actually intended to look for and at what point do they drift into random browsing?
3. How can I project the program inside a dome? Do I use a screen instead? Or multiple screens?
a. Building a dome
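On the first question, one hedged approach is to poll a statistics endpoint periodically (the MediaWiki API, for instance, exposes Wikipedia's article counts via action=query&meta=siteinfo&siprop=statistics) and turn the growth between polls into a spawn rate for the cloud particles. The polling itself is omitted below; this only sketches the mapping, and `particles_per_new_article` is an arbitrary tuning constant of ours.

```python
def growth_per_minute(prev, curr):
    """Articles added per minute between two (unix_seconds, article_count) snapshots."""
    (t0, n0), (t1, n1) = prev, curr
    minutes = (t1 - t0) / 60
    return (n1 - n0) / minutes if minutes > 0 else 0.0

def particles_to_spawn(growth, particles_per_new_article=0.5):
    """How many new cloud particles to emit, given the current growth rate."""
    return max(0, round(growth * particles_per_new_article))

# Example with made-up snapshots ten minutes apart:
prev = (1_700_000_000, 6_900_000)
curr = (1_700_000_600, 6_900_120)   # 120 new articles in 10 minutes
print(particles_to_spawn(growth_per_minute(prev, curr)))
```

Decoupling the (slow, rate-limited) API polling from the (fast) render loop this way means the visuals never stall waiting on the network.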
An experience is the cumulative effect of the five human senses engaged at various levels. A memorable experience is one that stimulates each of the five senses to a certain measurable degree. Designer Jinsop Lee, in his Five Senses Theory, says that any experience can be graded on all five senses.
For most learning environments, sight, sound and touch are usually high or active components of the experience. The sense of smell, however, is often involuntary or is used as a secondary device. Memories are known to be associated with smell. The scents of rosemary, lemon, lavender, and orange can help improve memory in Alzheimer’s patients. The smell of citrus can motivate people to clean their homes. And sniffing spiced apples can lower blood pressure. Then why is it that smell is not actively used for associative learning?
As Alex Kauffmann puts it in his project documentation, “Smell is subjective, it’s ephemeral, and it’s not binary.” Experiencing smells is a gradual process that has a certain delayed gratification. In terms of interactivity, olfactory feedback is not easy to detect. Scents waft, and are hence difficult to control and manipulate in space.
Vocal practice of Indian Classical Music is often a solitary experience that involves creating certain rituals for oneself. For example, one usually practices at a set time (ideally in the morning and/or evening – sunrise/sunset), drinks warm water an hour before practice, sits down on the floor, etc. During the practice, singers become more and more involved in their own music and less aware of what is around them – meaning they respond passively to external stimuli. Most singers I know close their eyes when they practice, and their sense of touch is occupied in determining the ‘taal’ (or beats).
For our final project we want to explore this complex interaction between sound and smell within the constraints of a tool that helps people learn how to sing the basic notes in Indian Classical Music. Each time one hits the exact frequency of a note, ‘the device’ sprays a certain scent as an affirmation. When the singer is on the right track, the affirmative smell starts to become more condensed and subsequently apparent to the singer. This gradual build-up of the smell when the singer is consistently performing well is in perfect harmony with the process of vocal practice, which, just like these smells, grows and becomes stronger with time.
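The gradual build-up could be modeled as a simple accumulator: scent intensity rises while the singer stays on pitch and decays when they drift, so the affirmative smell only becomes apparent after sustained accuracy. This is a hypothetical model, not our implementation, and the gain/decay constants are assumptions to be tuned.

```python
def update_intensity(intensity, on_pitch, gain=0.2, decay=0.1):
    """Advance the scent-intensity level by one detection interval."""
    intensity += gain if on_pitch else -decay
    return min(1.0, max(0.0, intensity))   # clamp to [0, 1]

# A run of mostly-correct singing lets the intensity climb:
level = 0.0
for on_pitch in [True, True, True, False, True, True]:
    level = update_intensity(level, on_pitch)
print(round(level, 2))
```

Making decay slower than gain means one slip does not erase the reward for a mostly good phrase, matching the forgiving, cumulative feel we want.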
What does this device do?
1. Takes in frequency input from a microphone and gives out an olfactory output for the user to experience.
2. Spreads an aroma that sets a tone for the singer.
3. Alters the lighting around the space, again, to set an ambiance and a tone.
Who is this device for?
This device is targeted at users with a certain musical inclination who are willing to learn Indian Classical vocal music, and it tries to lightly emulate the experience of one-on-one learning from a teacher or ‘guru’.
1. Pitch Detection
2. Olfactory Feedback
1. Tanpura (string instrument) emulation – plays 2 base notes that act as a guide for singing the other notes.
2. Records the singer’s progress
3. Feature to send/share a progress report with their music teacher?
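The tanpura emulation in the list above could be prototyped as a steady drone mixing two base notes, the tonic (Sa) and its fifth (Pa), written to a WAV file. This is a rough sketch: a real tanpura has a rich, buzzing timbre, so the plain sine waves and the choice of A#3 as tonic are simplifications of ours.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100
SA_HZ = 233.08                   # A#3 as the tonic
PA_HZ = SA_HZ * 2 ** (7 / 12)    # the fifth above

def drone_samples(seconds=2.0, amplitude=0.3):
    """Yield 16-bit PCM samples of the two-note drone."""
    n = int(SAMPLE_RATE * seconds)
    for i in range(n):
        t = i / SAMPLE_RATE
        s = amplitude * (math.sin(2 * math.pi * SA_HZ * t)
                         + math.sin(2 * math.pi * PA_HZ * t)) / 2
        yield int(s * 32767)

with wave.open("tanpura_drone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(struct.pack("<h", s) for s in drone_samples()))
```

In the actual device this role would more likely be filled by an oscillator pair inside the same Pure Data/Max patch that does the pitch detection, rather than a pre-rendered file.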
Our Expectations from the user testing next week:
We have the following questions/uncertainties about the concept, and we hope to resolve these in the next class when we play-test the project with our classmates:
When does the user anticipate feedback? Does the system give feedback on successful completion of a full octave or for every successful note?
What is the form factor? Is it a product that sits in front of the singer or is it an invisible, ubiquitous system in that room?
Should it be combined with a Tanpura (an electronic device, similar to a metronome, that creates an aural canvas for reference)?
How intense and how quick should the generation of smells be?
Dhruv has blogged about the results from the first play testing exercise. You can read about it here.