A biodegradable, stretchy textile, this material suits my soft-engineering interests as I explore sustainable wearables. I am interested in automating pleats and folds in fabric, for which the fabric’s recovery and drape are crucial. This material offers good recovery and drape, all-direction stretch, and porosity that varies with extension. Its softness gives a texture we are not used to in pleated garments, which would make the experience unexpected. Its variable porosity also offers more control points in the fabric for movement, which would allow me to create folds in all directions.
This material is flexible and elastic. Though classified as a nonwoven textile, it feels like a rubber sheet.
Manufacturers/Distributors: unknown
Applications include apparel, accessories for hygiene, medical, and baby care.
Week 3 // Everything is Physical : The Art of Digital Mapping
I couldn’t get my data to work within the map. I tried many options and many ways to debug it, but all I could get to work was the base map. I couldn’t figure out how to create the markers inside the getData function.
An exploration of smell and sound, our project encourages students of Indian classical music to practice singing by filling the room with a delicious smell when they hit the correct notes. “Smell is subjective, it’s ephemeral, and it’s not binary”. Our project explores people’s perception of sound by associating it with smells.
Construction of the Box
For each note pressed, the box begins to listen to the singer’s pitch. Based on how the singer performs, the box projects a scent.
We used two distinct scents that are easily recognizable (or familiar), so users don’t spend time familiarizing themselves with the smell and can focus more on the experience. The first is a spicy, earthy scent and the second, lemon essential oil.
Project done in collaboration with Dhruv Damle & Jen Kagan
The biggest technical challenge in this project has been learning and executing our project in MaxMSP. Many trials and errors and a long learning curve later, we finally got the pitch detection to work in Max.
I used the object ‘retune~’ with its ‘@pitchdetection’ attribute to get the frequency values. Once that was in place, I used if statements to limit the frequency detection to the acceptable frequency range of each of the 8 notes. I then added on (1) and off (0) flags to each note range. Next step: serial communication between Arduino and Max. Max sends these 0 and 1 values out as messages to Arduino, based on which two LEDs (eventually smell dispensers) blink.
After this iteration, the next step was to detect only the selected note. I added 8 separate gates, one per note, which switch the pitch-detection process on and off. Messages from Arduino (a button press) travel through these gates and start the process. The communication is two-way: buttons open gates (Arduino to Max) and pitch detection triggers the smells (Max to Arduino).
Special thank-yous to Justin, Jason, T.K., Gabe and Matt for all their help.
Last week, we hacked into the air freshener based on the Scratch and Sniff TV mechanism – and decided to add our custom bottles and scents for it.
One of the key components of this project is how the user reacts to the smells – which is what we will be testing in tomorrow’s class.
Some months ago, I stumbled across Fragrance Shop New York in East Village where I met Lolita, the owner. I spent 45 minutes in an engaging conversation with her about the history of her shop, her occupation, the power of smells, what they can do, her travel stories and much more. I went back to her store again, this time to find 2 distinct scents for our project.
The task: to find two distinct scents that are easily recognizable (or familiar) – so the user doesn’t spend time familiarizing himself with the smell. This way they can focus more on the experience.
After an intense testing session, I zeroed in on 3 scents.
1. Pink Martini – a citrus smell with a mix of grapefruit, oranges and lemon
2. Woodstock – a spicy, earthy scent.
3. Lemon essential oil – a condensed lemon smell
To make these scents project farther into the air, I bought a cologne base, essentially an alcohol that is to be mixed with the scent in a 60:40 or 80:20 proportion (for example, an 80:20 mix in a 10 ml bottle would be 8 ml of base to 2 ml of scent).
Next, we needed to get some spritzer bottles that would fit our scent dispenser framework. We found these at Muji.
In the Muji Store we also came across some Aroma Diffusers that were noteworthy. The interesting feature is that one can see the smell being dispensed (in the white foggy form). It can work as a good and subtle visual feedback.
One of the key components of this project is pitch detection. On detecting the certain assigned pitch/frequency, our device would spray the scent for the user to smell.
Scenario: The user sings a note. He/she sings this one note continuously for a certain length of time, say ‘x’ seconds. After ‘x’ seconds the user stops to take a breath. At this point he/she smells a scent.
1. How does the user recognize the degree of correctness? How can we differentiate the feedback for 50% correct and 70% correct?
2. Should there be a scent denoting when the user went wrong?
We decided to program our device to keep detecting the pitch at very small but regular intervals of time. At each interval (while the user is singing), it sprays the positive or negative smell depending on whether the user’s pitch falls within the ideal range. This way, whenever the user stops singing and breathes in, he/she gets to review the performance through the relative intensity of the two smells.
To do this, we are using Pure Data or MaxMSP. I tried a basic pitch detection patcher in Pure Data using my computer’s built-in microphone. The current version of Pd-extended doesn’t work too well with the latest Mac models, so I will have to re-do this in Max.
1. Design a pitch detection system for the vocal scale A#
2. Determine ideal frequency ranges for each of the 7 notes
3. Learn how to map it on MaxMSP & Connect it to Arduino
An experience is the cumulative effect of the 5 human senses at various levels. A memorable experience is one that stimulates each of the 5 senses to a certain measurable degree. Designer Jinsop Lee, in his Five Senses Theory, says that any experience can be graded on all 5 senses.
For most learning environments, sight, sound and touch are usually the high, or active, components of the experience. The sense of smell, however, is often involuntary or used as a secondary device. Memories are known to be associated with smell. The scents of rosemary, lemon, lavender, and orange can help improve memory in Alzheimer’s patients. The smell of citrus can motivate people to clean their homes. And sniffing spiced apples can lower blood pressure. Then why is it that smell is not actively used for associative learning?
As Alex Kauffmann puts it in one of his project documentations, “Smell is subjective, it’s ephemeral, and it’s not binary.” Experiencing smells is a gradual process with a certain delayed gratification. In terms of interactivity, olfactory feedback is not easy to detect. Scents waft, and are hence difficult to control and manipulate in space.
Vocal practice of Indian Classical Music is often a solitary experience that involves creating certain rituals for oneself. For example, one usually practices at a set time (ideally in the morning and/or evening – sunrise/sunset), drinks warm water an hour before practice, sits down on the floor, etc. During practice, singers become more and more involved in their own music and less aware of what is around them, meaning they respond passively to external stimuli. Most singers I know close their eyes when they practice, and their sense of touch is occupied in determining the ‘taal’ (or beats).
For our final project we want to explore this complex interaction between sound and smell within the constraints of a tool that helps people learn how to sing the basic notes in Indian Classical Music. Each time one hits the exact frequency of a note, ‘the device’ sprays a certain scent as an affirmation. When the singer is on the right track, the affirmative smell becomes more condensed and, subsequently, more apparent to the singer. This gradual build-up of the smell when the singer is consistently performing well is in harmony with the process of vocal practice, which, just like these smells, grows and becomes stronger with time.
What does this device do?
1. Takes frequency input from a microphone and gives an olfactory output for the user to experience.
2. Spreads an aroma that sets a tone for the singer.
3. Alters the lighting around the space, again to set an ambiance and a tone.
Who is this device for?
This device is targeted at users with a certain musical inclination who want to learn Indian Classical vocal music, and it tries to loosely emulate the experience of one-on-one learning from a teacher or ‘guru’.
1. Tanpura(string instrument) Emulation – Plays 2 base notes that act as a guide to sing the other notes.
2. Records the singer’s progress
3. Feature to send/share progress report with their music teacher?
Our Expectations from the user testing next week:
We have the following questions/uncertainties about the concept, and we hope to resolve them next class when we play-test the project with our classmates:
When does the user anticipate feedback? Does the system give feedback on successful completion of a full octave or for every successful note?
What is the form factor? Is it a product that sits in front of the singer or is it an invisible, ubiquitous system in that room?
Should it be combined with a Tanpura (an electronic device similar to metronome that creates an aural canvas for the reference)?
How intense and how quick should the generation of smells be?
Dhruv has blogged about the results from the first play testing exercise. You can read about it here.
For our midterm project, we decided to design a sonic experience where the user controls objects within a given soundscape. This product allows one to control ‘pan’ and ‘volume’ with specific bodily gestures.
The user pans the sound by moving his/her head from left to right, and increases or decreases the volume by moving it from front to back.
The Concept : Currently in its prototype stage, this product is designed to foster a series of ‘audio puzzles’ – games that the users can play using only auditory cues. These audio puzzles augment reality in order for the user to have an immersive sonic experience.
A schematic of the final circuit designed.
Testing the circuit
Arduino Micro + Adafruit accelerometer + Bluefruit Bluetooth link
Design & Fabrication
Built over a pair of existing headphones, the device holds within it an accelerometer that detects the user’s head movement. The earpieces are made using tin and acrylic.
For this project I used: Arduino Uno, A Servo Motor, A Potentiometer, Conductive Fabric, Thin Copper Wire, Hookup Wires, Breadboard and an LED.
This project is a prototype of a bigger project I envisioned. The idea is to create a series of inputs, where each output acts as the input for the next sensor: a chain of inputs and outputs that works to light an LED. In the work presented, the circuit connects a potentiometer to the servo motor, controlling its rotation. As the motor touches the conductive fabric, the copper wire completes the LED circuit.
Challenge: When building this circuit, I couldn’t get my servo motor to rotate synchronously with my potentiometer. The problem was with how I had wired the potentiometer to power, ground, and the analog input.