Give it about a month and Chrome for Android will provide Web Audio API support. It's currently in Chrome for Android Beta.
Our web app that plays audio in sync on multiple devices works in iOS, Chrome (non-mobile) & Firefox Aurora. We're eagerly waiting for the newest Chrome for Android to drop.
That's good news! What I really want though is to run it inside a WebView rather than just a mobile web app. I feel this gives more control over the experience, screen real estate, monetization options, and deeper integration with the device's hardware features.
It would, but WebViews are based on the default browser, so going that route instead of just targeting Firefox or Chrome for Android means giving up a lot of power and flexibility.
hey everyone, I didn't see that this had been posted until now—I wrote this as a fun little experiment with audio in the browser. The main hack here was to generate .wav files at the byte level directly in JavaScript using data URIs. If you want better-quality sounds, I suggest using the actual audio APIs that different browsers have exposed. Also, I think the interface came out rather neat, and the CSS for the keys is pretty nice :)
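For the curious, the byte-level trick looks roughly like this (a minimal sketch of the general approach, not the project's actual code; makeWavDataUri is just my name for it):

    // minimal sketch: build a 16-bit mono PCM .wav byte-by-byte,
    // then base64-encode it into a data URI
    function makeWavDataUri(freq, seconds, sampleRate) {
      sampleRate = sampleRate || 44100;
      var numSamples = Math.floor(seconds * sampleRate);
      var dataSize = numSamples * 2; // 2 bytes per 16-bit sample
      var view = new DataView(new ArrayBuffer(44 + dataSize));
      function str(offset, s) {
        for (var i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
      }
      str(0, 'RIFF');  view.setUint32(4, 36 + dataSize, true);
      str(8, 'WAVE');  str(12, 'fmt ');
      view.setUint32(16, 16, true);             // fmt chunk size
      view.setUint16(20, 1, true);              // PCM format
      view.setUint16(22, 1, true);              // mono
      view.setUint32(24, sampleRate, true);
      view.setUint32(28, sampleRate * 2, true); // byte rate
      view.setUint16(32, 2, true);              // block align
      view.setUint16(34, 16, true);             // bits per sample
      str(36, 'data'); view.setUint32(40, dataSize, true);
      for (var n = 0; n < numSamples; n++) {
        var sample = Math.sin(2 * Math.PI * freq * n / sampleRate); // sine wave
        view.setInt16(44 + n * 2, sample * 0x7fff, true);
      }
      var bytes = new Uint8Array(view.buffer), binary = '';
      for (var j = 0; j < bytes.length; j++) binary += String.fromCharCode(bytes[j]);
      return 'data:audio/wav;base64,' + btoa(binary);
    }
    // usage: new Audio(makeWavDataUri(440, 0.5)).play();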
I've verified this with my Casio keyboard, and the webpage can receive events when I play. It can also play audio out through my Casio's speakers.
This will open up a whole new type of website, where you can plug in your instrument and jam with friends, or learn along with a tutorial visualization or video.
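For anyone who hasn't tried it, receiving those events is pleasantly simple (a sketch assuming a browser that exposes navigator.requestMIDIAccess):

    // listen for note on/off messages from any connected MIDI input
    navigator.requestMIDIAccess().then(function (midi) {
      midi.inputs.forEach(function (input) {
        input.onmidimessage = function (msg) {
          var status = msg.data[0] & 0xf0,
              note = msg.data[1],
              velocity = msg.data[2];
          if (status === 0x90 && velocity > 0) {
            console.log('note on', note, 'velocity', velocity);
          } else if (status === 0x80 || status === 0x90) { // note off
            console.log('note off', note);
          }
        };
      });
    });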
> This will open up a whole new type of website, where you can plug in your instrument and jam with friends
Even replacing audio with MIDI, the network latency imposed by the speed of light alone is too great for simultaneous jamming with friends in other places. Heck, it's an issue even for marching bands -- the difference between the speeds of light and sound across even a football field (sound needs roughly 300 ms to cover 100 meters) requires special attention in order to keep things sounding together for the people in the stands.
The only good solution I've seen for this is NINJAM[0], a protocol for online jamming where, instead of trying to fight latency, the creators changed the problem. By enforcing a uniform tempo and number of beats per phrase (inside of which a repeating chord progression must fit), people can jam together by playing to what the other people played n beats ago, and everybody else will hear you n beats late.
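The core trick is easy to sketch (my paraphrase of the idea, not the actual protocol; the numbers are illustrative):

    // everyone hears everyone else exactly one interval late, so network
    // latency hides inside the musical structure instead of being fought
    var BPM = 120, BPI = 16; // beats per minute / beats per interval
    var intervalSecs = (60 / BPM) * BPI;

    // with the Web Audio API, schedule a peer's recorded interval to start
    // one full interval after the moment it was (logically) played
    function schedulePeerInterval(ctx, audioBuffer, intervalStartTime) {
      var src = ctx.createBufferSource();
      src.buffer = audioBuffer;
      src.connect(ctx.destination);
      src.start(intervalStartTime + intervalSecs); // always n beats behind
    }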
I started working on a browser-based client for this protocol a while ago[1], though I unfortunately put it on the back burner before figuring out proper vorbis encoding/decoding in JS. A MIDI adaptation of this could prove to be quite fun, and I think my client (if ever finished) could benefit from the MIDI integrations already present in some existing clients as well. Thanks for bringing it to my attention.
That is awesome! Despite all the fun I had making this "html5 piano", for electronic pianos you really can't beat having an actual MIDI device with weighted keys. I’m looking forward to all the future awesomeness with audio and the web—its current state is really frustrating, but it keeps slowly getting better.
It would be cool if there were a way to convert piano sheet music to QWERTY so you could play it on a QWERTY piano (if a good QWERTY piano actually exists). Then everyone could play piano!
That’s a fun idea. I directly used the keyboard-to-piano mapping that GarageBand employs in their app, which tries its best to mimic a piano.
There are lots of things that make actual QWERTY piano difficult: there are fewer keys than notes on a piano; hitting complex chords would be very difficult (regular typing doesn't train you to press entire words as a single combined chord of keys); and computer keyboards have none of the benefits of weighted keys. But perhaps one-note-at-a-time melodies could be played pretty proficiently by non-musicians with a QWERTY player?
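For reference, a GarageBand-style layout boils down to a tiny lookup table, something like this (an illustrative sketch, not the app's actual table; playNote stands in for whatever triggers your samples):

    // middle letter row = white keys, row above = black keys
    var KEY_TO_SEMITONE = {
      a: 0, w: 1, s: 2, e: 3, d: 4, f: 5, t: 6,
      g: 7, y: 8, h: 9, u: 10, j: 11, k: 12
    };
    document.addEventListener('keydown', function (e) {
      var semitone = KEY_TO_SEMITONE[e.key];
      if (semitone !== undefined) playNote(60 + semitone); // 60 = middle C (MIDI)
    });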
I understand the sound data are generated by JavaScript. How many distinct sounds are generated? How much space is required to store this data? Are the notes of a chord combined by JavaScript or the browser?
Each note is generated as a separate sound file when the piano first loads. They're all short wave files, so when you hit a bunch of notes at once, it's just playing several sound files simultaneously.
If you click the square in the upper right, you'll get options to change the waveform and volume response. There are some other options too, like shifting by an octave or changing the color of the piano :)
Better things could be done by using the web audio APIs, but I explicitly started this project wanting to build audio using data URIs.
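Concretely, the load step looks something like this (a sketch reusing the makeWavDataUri helper from my earlier comment; the note list is just an illustrative subset):

    // pre-generate one short WAV data URI per note at load time…
    var NOTE_FREQS = { C4: 261.63, E4: 329.63, G4: 392.00 };
    var sounds = {};
    for (var name in NOTE_FREQS) {
      sounds[name] = makeWavDataUri(NOTE_FREQS[name], 0.5);
    }
    // …then a chord is just several audio elements playing at once
    function playChord(names) {
      names.forEach(function (n) { new Audio(sounds[n]).play(); });
    }
    // playChord(['C4', 'E4', 'G4']); // C major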
A few years ago a friend and I wrote a simulator for a '90s toy called "Smart Start" ("Pense Bem" in Brazil), which has a crude piano mode (square wave only) as well.
I've been working on something similar at http://thrusong.com -- you can plot out your notes on a grid and generate a studio-quality riff with any VST instrument plugin. It's built for social songwriting.
This was great to mess with on my Chromebook Pixel's touch screen! It just makes me wish there were a way to force multitouch, instead of two-finger taps always bringing up the context menu.
It's cool sites like this that remind me why I still use a PS/2 keyboard. Anybody else out there push all the keys at once just to make sure it would work?
I wrote this web app in May 2012—I don’t think iOS was there yet? I’m open to a pull request though https://github.com/mrcoles/javascript-piano but since the project was mainly a test for generating different wave files as dataURIs (and the piano was an afterthought that was actually spawned from this half-baked idea http://mrcoles.com/media/test/guy/ ), idk if the dataURI approach is any easier on iOS yet (or ever will be)?
are you generating the samples on each key press or when the piano loads? audio API or base64-encoded wave files? I had made a synthesizer a while ago, but could never figure out how to keep the audio from clicking
hrm… maybe my wording wasn’t as clear as I had hoped. It generates the audio ahead of time (as base64 data-URI wave files)—the on-the-fly part is that it’s regenerated in JavaScript each time you change the settings, like waveform and volume response (click on the square in the top right or type "?" to see the changeable options).
I assume the audio APIs would make less click-y sounds (and I don't suggest manually created wave files as a solid way to create perfect sounds), but I also sort of like the clicks. When I presented this at Hack and Tell NYC, I claimed that you had to play it in Firefox 12 to hear the exact clicks that were originally intended in the product ;)
try some of the other styles and volume responses by clicking on the square in the top right or hitting "?"—to be fair, everything sounds a lot more synth than real… but that wasn't my goal in building this project.
Would be so cool to create a full 88-key digital piano on a flat piece of glass with a touch layer. No edges, just well-rounded, toughened, see-through glass.
Simulate weighted keys in software; have modes (strings is a must!), a basic metronome, an equalizer, controls, and additional synth effects; and, on top of all that, a custom-built browser to suit a footprint of 58.8 x 12.8 x 22.2 inches! :D
It would be cool, and it may be ok for certain kinds of playing, but a subtle mordent, a tasteful triplet or a good arpeggio all depend on feedback from the keys. I think this is more than muscle memory and that "simulating" the sound will never do, a mordent in particular depends on a very mechanical relationship between adjacent keys rising and falling.
There's a reason most band pianists haul around electric pianos with heavy weighted keys rather than cheaper keyboards that contain the same sound synthesis engine.
They were trained on this type of gear, so they are most expressive with it. A lifetime of muscle memory can't be undone. What about children who are just beginning today? I'm looking forward to a new generation of musicians whose native instrument is the touch panel. It seems odd to interface with modern technology using an 18th-century physical interface. It's neat to watch Deadmau5 "play" his setup. I can only imagine what children born today will be doing when they're in their 20s.
The old physical interfaces aren’t surpassed yet. Physical interfaces have the advantage of sending feedback through touch. Piano keys that push up harder when you press harder help you notice how hard you’re playing, and keys that depress discretely help you aim your fingers away from the lines between keys so you are less likely to miss.
Of course, the ideal situation is a mix of electronic devices’ flexibility and physical keyboards’ feedback. That could be something like a touch screen that can raise and lower physical bumps on it to feel like keys, or vibrate with different strengths in different places to provide feedback to each finger. But we don’t have that technology yet.
Wow, I’ve met the creator of that Kickstarter before. I went to a free concert at Temple University where he demoed his “magnetic resonator piano” instrument (http://music.ece.drexel.edu/research/mrp), which is kind of a more-limited acoustic version of the device in that Kickstarter. From talking to him after the concert and seeing what he’s produced, I don’t doubt that that Kickstarter is legit.
Anecdotal: I understand that children who have learned to play on electronic (physical) keyboards (e.g. Yamaha stage piano) have problems when they encounter the mechanical actions on uprights and grands.
What will happen is a slow change in playing style, so things will sound different. Sort of like the violin's change from the arched (Baroque) bow to the Tourte bow, only more radical.
I grew up playing the piano, then went on hiatus for two or three years, then took a year of harpsichord lessons (harpsichords have significantly lighter action. And narrower keys.) Now playing a piano is downright painful--I'm no longer used to the finger and wrist strength required to get a key down, nor am I used to the stretching required to hit larger intervals.
The plural of anecdote is not "data", but I, for one, will no longer understate the difficulty in transitioning to a new type of action.
I agree. It can't, and probably shouldn't, be considered a replacement for the standard equipment; rather, it could lead to the invention of new sounds and patterns of its own, given that it doesn't have the same mechanical nuances (limitations) as the standard equipment. Sort of like how the electric guitar doesn't replace an acoustic, or the electric drum doesn't quite replace the -- well, the purple monster.
And then with these web-capable devices we can end up building a web of sounds! Have applications talk on notes, coherence, resonance -- hyper speech/sound transfer protocol (hstp://)? #weekendimaginations :D
(Of course, I know, this is all according to standards and all. But still.)