Buttons that morph out of the surface of the device. (tactustechnology.com)
225 points by jamesbritt on June 6, 2012 | 88 comments



This will be huge for automotive touchscreen applications: I have no doubt it will save lives.

With current touchscreen interfaces, you rely almost solely on visual cues and feedback to make correct inputs. Not a good situation while driving, when you need as much visual attention as possible focused on the road.

The best interfaces for operating complex or dangerous machines have fixed, haptically identifiable reference points; this allows you to reach out (in the dark or without shifting visual focus) and make some sort of adjustment without fumbling or disturbing your flow. If the touchscreen can be elegantly brought into that realm, so much the better.

Tesla Model S 2.0 perhaps?

BTW, anyone have an idea of how this technology works?

Update: From the article linked below in a comment by kpozin, it's based on hydraulics. Ingenious.


Similar to kiba's comment below, but perhaps a step in between, it would make sense for car designers to look at other similarly complex machines that require the operator's constant attention to the outside world before innovating blindly.

HOTAS (Hands On Throttle and Stick) as a concept has been adopted almost universally in fighter aircraft - and extended into Formula 1 and Indy cars as well - for this exact purpose.

The modern car dashboard design is beyond stupid, almost criminally so when you start to see 12 inch video screens placed underneath the windscreen in such a way that the driver has to take his attention off the road to use pretty much anything in the car.

Tactile feedback isn't going to help much if you are fumbling with controls floating an arm's length away between you and the passenger.

There is plenty of room on the steering wheel for all of the car's functional controls, especially if you make use of a HUD/windscreen visual menu and multifunction controls. I almost exclusively use my 5 button steering wheel control to manage my entire audio experience while driving right now.

I'm not sure how many people will need to be killed before something like that becomes the norm. Of course, seatbelts and airbags both required their fair share of human sacrifice before the car gods decided lowly operators warranted them, so maybe kiba is onto something. :-)


I very much agree that dashboard design is generally awful. As controls have become more digital and less mechanical, they seem to have come to require more attention. For instance, one general pattern used to be that if you wanted warmer air, you slid a horizontal lever more or less to the right or left. Once you got used to it, you could set it without looking, and with gloves on.

Nowadays it's much more likely that you spin a knob or repeatedly press a button to set the temperature, and instead of the setting being indicated by a combination of position and color/text, you likely have only a numeric value, an exclusively visual indicator.

Not only does the digital, numeric value (usually in degrees F or C) provide a (possibly) unneeded level of precision, it's a visual-attention sink.

The "glass cockpit" of the Tesla I have to believe is even worse in this context. However, given its extreme versatility, I doubt the touchscreen will be going away. If a screen interface can be designed to require less attention, and I believe it can be, it's possible that we can gain all the promised flexibility of glass without it becoming a safety hazard.

Thanks for the HOTAS reference, btw. Hadn't heard of that.


The indicator controls are a good example of this. They used to mechanically click into one of three positions: indicate left, off, and indicate right. This made it very easy to switch them off. Now when you want to manually switch off the indicator it's very easy to go too far and start indicating in the opposite direction. The only way to know what position they're in is then to look down at the lights on the dashboard. It seems a huge step backwards in usability.


Unless they're in a dogfight, airplanes do not require constant attention to the outside world. That's why autopilots and drones are technologies that have been available in the aerospace industry for years, but are just now getting good enough to consider in cars.

Also consider the amount of training that goes into becoming a fighter pilot, vs. learning to drive a car. Powerful interfaces are complex and require a lot of training to manage properly under stress--which is when it is most important to do it right.

That said, modern cars do have a lot of controls on the steering wheel: all the typical blinkers, brights, horn, etc. and now often radio controls, cruise control, telephone Bluetooth controls, even gear shifting paddles.

HUD has been tried in cars before, but never caught on. Drivers tended to find it more distracting than helpful. Most of the information for driving does not need to be so..."contextual", the way that, say, a missile targeting system does. Anything that interposes itself directly between the driver's eyes and the outside world could be dangerous.

I do agree with you that the touchscreens in the dash are frickin awful UI design. I think they are like the "store" setting on LCD TVs--their main job is to look cool in the showroom.


> Unless they're in a dogfight, airplanes do not require constant attention to the outside world.

This sounds like you're not a pilot. Pretty much the first thing flight school teaches you is situational awareness. Constantly visually scanning for nearby traffic is a very, very important skill.


What about IFR? What is the automotive equivalent of IFR?


IFR is essentially the painted lines on the road - it's just a directional radio signal that you can follow to see if you're following the glideslope.

An autopilot can follow it, but even in IFR you're supposed to be visually scanning, as you can be in IFR weather while still having a mile or two of visibility around your plane.


Wait, are you a pilot? IFR (instrument flight rules) means you must be able to fly looking solely at your instruments, not out the window. There's no equivalent in driving.

I think you might be thinking of ILS.

Disclaimer: I'm not a pilot, although I did stay at a Holiday Inn last night.


My dad's a private pilot.

IFR doesn't mean no visibility, it means visibility below minimums. If you're driving in fog, you still look out the window. Same for aircraft. You may be in instrument flight rules while being able to see the runway.

You're right, though, on ILS. Part of being able to do IFR, but not IFR itself.


Dogfighting/takeoff/landing is a lot closer to driving in traffic than it is to general flying. The point being that a complete awareness of your surroundings is essential in all of these situations.

> Also consider the amount of training that goes into becoming a fighter pilot, vs. learning to drive a car. Powerful interfaces are complex and require a lot of training to manage properly under stress--which is when it is most important to do it right.

This strengthens my point. If something as complex as a fighter jet can have its core functions displayed in an easy-to-use display that keeps the pilot's "head up", then certainly we can do it for cars.

> HUD has been tried in cars before, but never caught on. Drivers tended to find it more distracting than helpful.

That's because that "HUD" consisted of an annoying little speedometer that was pretty much out of the line of sight. Today, we have much more information delivered to the driver in general - the most distracting being GPS/map data and cell phone operation. These are perfect for HUD use.

I'm talking about something like this: http://www.carpages.co.uk/bmw/bmw-7-series-part-1-19-11-08.a...


That graphic is hilarious. WTF do all those cryptic numbers and arrows mean?? Even if I knew, I would have to stare at them for a second or two to collect and parse the data.

You might start by asking yourself why, in 2012, almost every car still comes with an analog (dial-style) speedometer. The answer is that they are unambiguous and easily scanned. You do not need to actually read the numbers, and can collect the speed data in a quick eye flick (well under 1 second). Again: everyone already tried digital speed readouts in the 1980s. They sucked.

Engineering and innovation put good user interfaces under continuous pressure to grow more complicated. The Google homepage is a well-known example from the web of a UI that successfully resisted this pressure, to very good effect. (Although even the Google homepage is slowly succumbing.)

Consider something like GPS. The greatest visual interface for GPS directions is none at all. The driver in need of directions should be able to ask for them verbally and receive instructions verbally--that way they can maintain their visual scan. In the age of paper maps, everyone knew it was silly to try to drive and unfold and read a map at the same time...somehow this common wisdom has been forgotten just because we can use pixels now.


> This strengthens my point. If something as complex as a fighter jet can have its core functions displayed in an easy-to-use display that keeps the pilot's "head up", then certainly we can do it for cars.

Not really. Fighter jets require extensive education and training to be able to fly at all, let alone competently. They can afford to teach the pilot "click the thumb button twice for this, three times for this other thing". Car manufacturers can't.


Well, self driving cars will make the safety application moot.


I'm a big fan of the nascent self driving technology, but vehicles in the first several waves of that revolution will still need a proper human interface.

And realistically, I think "dual interface" cars will own the general market (non-taxi, etc.) for a long time to come. Anyone who enjoys driving once in a while on a nice piece of road (e.g. the 280 between Daly City and Cupertino) is going to want a car that also features a manual mode.

Eventually we'll have cars where a hidden steering wheel unfolds from the dashboard and pedals rise from the floor. And parental controls to prohibit your teen from accessing them ;)


I dug this up from the uspto website.

http://www.uspto.gov/web/patents/patog/week20/OG/html/1378-3...

Go to the full text and then the images area and you get to see more detailed diagrams.


Here's a brief article about Tactus: http://www.theverge.com/2012/6/5/3064674/tactus-technology-p...

Looks like the button locations are fixed for now, but they're hoping to make them adjustable in the future.


If you could make the grid small enough, it wouldn't matter that they were fixed, as long as you could activate regions to form whole buttons.
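
To make that concrete, here's a minimal sketch of what "activating regions to form whole buttons" could look like in software. Everything here is an assumption for illustration: the grid resolution, screen size, and the raise_cells hook are invented, and Tactus' actual design is based on fixed hydraulic channels rather than a general-purpose cell grid.

    # Hypothetical sketch only: map on-screen button rectangles (in pixels)
    # onto a fixed grid of raisable cells. Grid size and the raise_cells
    # callback are assumptions, not Tactus' actual design.

    GRID_COLS, GRID_ROWS = 64, 40      # imaginary tactile-cell resolution
    SCREEN_W, SCREEN_H = 1024, 640     # display size in pixels

    def cells_for_rect(x, y, w, h):
        """Return the set of (col, row) cells covered by a pixel rectangle."""
        cell_w = SCREEN_W / GRID_COLS
        cell_h = SCREEN_H / GRID_ROWS
        cols = range(int(x // cell_w), int((x + w - 1) // cell_w) + 1)
        rows = range(int(y // cell_h), int((y + h - 1) // cell_h) + 1)
        return {(c, r) for c in cols for r in rows}

    def raise_buttons(button_rects, raise_cells):
        """button_rects: list of (x, y, w, h); raise_cells: driver hook taking a cell set."""
        active = set()
        for rect in button_rects:
            active |= cells_for_rect(*rect)
        raise_cells(active)  # e.g. route fluid/pressure only to these cells

The finer the grid, the better an activated region approximates the button the UI actually drew, at the cost of more seams in the surface.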


Many narrow independent regions could cause visibility to suffer.


The video can be found here: http://vimeo.com/43431035

I really dislike the voice-over they have, but that might be personal preference. It just sounds like a regular person imitating a professional voice actor to me.


I think maybe he's actually deliberately going for a little humor.


Agreed. The voice-over is bad enough to make this sound fake.


Yep, shame they went for the Faux-Matrix-Morpheus voice over.


That's funny, I thought it was reminiscent of Agent Smith.


I know this is a fraud, and it's only because of the voice-over.


They have a patent. Generally at that stage fraud isn't a word that comes to mind.


I am sure the first generation is going to be fraught with problems, but this looks so good that I would badly like to own part of this company. Massive game changer, with the sort of technology I thought to be 5-10 years out. Unqualified thumbs up from me.


Same here; it seems like the range of applications is incredibly broad.


This would be a killer feature for me if the buttons had some kind of depression, so I could feel the button before actually pressing it. Without that feature it's just a bumpy touch screen which isn't all that much better than a flat one in my opinion.


Would the bubble create a non-touch-sensitive surface, since it is at a distance from the screen? Assuming it can be pressed flat against the glass before the touch screen beneath detects a touch, you have pretty much what you want.

There would still be touch sensitive areas between the bubbles, which might be a problem, depending on the app.


Exactly. My first reaction was "Okay, my mind is thoroughly blown." ...then my second thought was "wait, feeling around a bumpy touchscreen will just lead to more accidentally pressed keys unless they don't trigger until they're pressed down"


agreed, feedback is vital


Judging by the video, it looks like the layout of the buttons is fixed, only their presence is toggleable.

If so, that's a bit of a bummer.


A bummer. Now, that is classic.

I remember using public payphones just a little over a decade ago. One time some kids had put Vaseline on the earpiece. That was a fun time.

Now we have a phone in our pocket that we can talk to and have it do nearly anything. By next year, it will have buttons when we want and no buttons when we don't.

Vaseline on your ear is a bummer. This button-no-button stuff is not a bummer.


I think your post was tongue-in-cheek, and it is maybe one of my favorite HN posts ever. That said, I get a little bummed that there are no Back to the Future 2-style hoverboards, that I don't have a personal flying car, and that I still have to shower every day. I'd love a polymorphic phone screen, and am bummed that this display isn't that. Just out of curiosity, I rubbed some Vaseline in my ear just now, and must admit that it is more of a bummer than the lack of polymorphic phone screens.

"Everything is amazing, and nobody's happy" http://www.youtube.com/watch?v=8r1CZTLk-Gk


It's a bit of a bummer because it's trivial and boring. It almost looks like it's just a touchscreen with pockets on the front that you blow compressed air into. The fact that this is getting so much attention - and the fact that it's wasting some of our time in the process - is kind of annoying.


I feel like people like you complained when the first computers were invented because they were slow. This is haptic technology progressing, and just the fact that it's getting major media attention is amazing, since on-demand haptic feedback is the biggest thing missing from digital technology. Touch is such a fundamental part of the human experience that anything that advances the technology toward that goal should be celebrated, and here you are declaring it annoying?

It boggles the mind...


Electrostatic haptic feedback looks much, much, much more promising. Toshiba demo'd it two years ago: http://www.youtube.com/watch?feature=player_embedded&v=j...


That looks amazing, I wonder why it hasn't been commercialized yet. Too expensive?


This isn't a prototype computer. This is a prototype relay that can only open. If they'd given their engineers just two more months they'd have been able to show off a grid of pixels - something that would work in multiple orientations with arbitrary UI elements. Show me the full relay and an adder and I'll be happy.


I had similar sentiments as the OP.

I feel I was just humbled in about the most effective way possible.


I think you might have to go 20 years to find that kind of difference. A decade ago, at least among the techie Americans that are the typical HN reference point, we had PalmPilot smartphones. Now we have some other brand of smartphones. Soon to have magical buttons, but otherwise not that huge of a revolution. The #1 thing to do with them a decade ago was to browse Google Maps and restaurant-review sites while away from a computer. Guess what the #1 thing to do with them in 2012 is?


We definitely were NOT browsing Google maps on our smartphones a decade ago.


Good point. :) However, there were maps on the Palm, just from a different vendor. Probably MapQuest, now that I think of it. The main point is that, imo, there was a step change when it became possible to find directions/restaurant info away from a computer. Now it's a bit more convenient, but just an incremental change in comparison.

In any case, it's certainly not the case that 2002 was some ancient era where we all used payphones, at least not in the West. That's hyperbole at best.


Google Maps didn't even launch publicly until 2005.


The earliest I had Google (or any) maps on my phone was 6 years ago.


I had a phone with a terrible web browser, a tiny screen, and ridiculous charges for bandwidth before that. MapQuest likely had a terrible mobile site for it to talk to. I think it's likely that I theoretically had maps on my phone 8, 9, or even 10 years ago.

In practice though, I didn't have them until June 2007.


Maps worked fine on the Palm. Not as nice as today's maps, but they were sufficient. Also, Sprint introduced a reasonably priced uncapped plan around 2002 or 2003 or so, coinciding with the introduction of one of the Treo models, so there wasn't really a data-charges problem. In fact, the data plans got worse for a period later in the decade. There was a time in the late-2000s when Sprint re-introduced per-MB bandwidth charges, except that Palm phones were grandfathered in on the old plan, so you got unlimited data on Palm, but not on anything else.

It's true that uptake has increased over the last decade, though. Partly due to product improvements, but I think largely due to cultural changes.


Even if you couldn't have overlapping pre-determined button fields for the dialer, apps, browser, etc., just having the keyboard, even in one configuration, would still be highly functional.


You should still be able to dynamically change the symbols printed on those buttons. This still looks exciting to me. :-)


I tend to agree with you on this. It's a great idea, but not an amazing idea. And the reason is simply that we can think of what a better version would look like - polymorphic buttons.

I wonder if that's actually a good criterion for defining game-changing innovation - that you can't think of what a better version would look like.


I've personally been fantasizing about improved tactile touchscreen features since the iPad came out, but that didn't change the fact that it was a "game changing device". So I suspect that's not a great standard to judge by.

The fact is that every step in technology allows you to see farther ahead. To complain that the mountain you stand on isn't as tall as the one it allows you to see is ridiculous.


That's a fair point.

In my defence, I'm not complaining about it, just agreeing with the guy I replied to that I was somewhat disappointed. I was then curious why I was disappointed, and wondered whether my rationale was generalisable. It probably isn't.


This always happens with technology videos. You can project a flat image onto a transparent screen? Call it a hologram! You can create fixed buttons? Suggest that it morphs into anything.


While I agree, I do completely hope that we will figure out a way to make the buttons dynamic - maybe after one or two generations of this technology.


the way I see it, physical buttons have two uses. one is tactile navigation: the ability to find out what part of the screen you're touching and navigate it without vision.

the second use, though, is activation feedback: a button has to be able to be navigated separately from its activation. unless these protrusions can tell whether they are pressed or not, it still seems useless to have a lumpy touchscreen...

so I really hope they found some method of separating simply sliding a finger around searching for a button from actually pressing down on it... this is the exact same issue as the various touch-mice devices

(left click with finger on left side, right click with finger on right side, unless that is you like to REST your fingers while clicking)
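
For what it's worth, here's a minimal sketch of that separation, assuming the touch controller can report a per-contact pressure (or contact-area) value. The thresholds, hysteresis values, and event names are invented for illustration; nothing here is something Tactus has described.

    # Illustrative only: distinguish "resting/sliding on a raised button" from
    # "pressing it down", assuming a normalized pressure reading per contact.
    # Thresholds are made up; hysteresis avoids chatter around the threshold.

    PRESS_THRESHOLD = 0.7    # pressure needed to count as a press
    RELEASE_THRESHOLD = 0.4  # lower threshold on release (hysteresis)

    class ButtonContact:
        def __init__(self):
            self.pressed = False

        def update(self, pressure):
            """Feed the latest pressure sample; return 'press', 'release', or None."""
            if not self.pressed and pressure >= PRESS_THRESHOLD:
                self.pressed = True
                return "press"       # bubble pushed flat: activate the key
            if self.pressed and pressure <= RELEASE_THRESHOLD:
                self.pressed = False
                return "release"
            return None              # just resting or sliding: no activation

On panels that can't sense force directly, contact area is sometimes used as a rough stand-in for pressure.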


From the site:

"While touchscreens provide a versatile user experience, they provide no tactile experience for consumers. Vibration haptics and similar solutions try to simulate a sensation of touch, but all are "feedback" technologies, vibrating only after touching the screen (even if they are touched in the wrong place or by mistake). In contrast, Tactus' technology creates real, physical buttons, where users can rest their fingers on the buttons, as on a mechanical keyboard, and input data by pressing down on the keys. Tactus is the only solution to both "orientation" and "confirmation" problems that are inherent in touch screens."


Braille. Can you imagine the world of potential here for our low-vision friends? Dynamic text for signs (seen through something like Google's glass), down to topographical layout of the ground in front of you for navigation... This could really shake things up.


Site hosted on Yahoo and instantly overloaded. This is like reading Slashdot in 1998.


Prediction: Apple buys the company and then proceeds to get every patent they can on this tech.


Unfortunately for them, there won't be many patents that would stand up under examination, since there's been heavy research into haptic interfaces, and especially haptic touchscreen interfaces, for quite a while now.

In fact I would be surprised if Apple wasn't doing research into this themselves.


Apple filed a patent in March 2012 for a haptic touchscreen. http://www.appleinsider.com/articles/12/03/22/apples_haptic_...


Absolutely. And then reveal the innovation through some iPhone 6s or something in between?

At least that's what they did with Siri.


I don't see any info on how it works. The website is just justifications for tactile interaction.


Here is very similar technology: http://www.chrisharrison.net/index.php/Research/PneumaticDis...

Nokia has also done research into this but with little bit different technology: http://c2499022.cdn.cloudfiles.rackspacecloud.com/wp-content...


Could you take little sections of this stuff, stack them together, and use them as a sort of programmable matter type thing?

Maybe if you fold them over or something they could just basically expand and contract. So you expand a bunch in one section to make a feature. Then wrap the whole thing in a very flexible skin. Press a button on a computer, and it morphs between (for example) an alligator shape and a person shape.


Or it would be cool if you could make a ball shape with a solar power source inside, and it could push itself around and collect energy. It could turn on and off rows of "buttons" to give itself momentum.

Throw in a camera for object tracking; or a bright light - you could make little glowing orbs that roll around, float in water etc.

Throw a bunch over a wall - they move around and map rooms as a swarm.


Very interesting and a bit scary. Think about it: a screen that can reach out and change its shape. That almost guarantees some (software) application misuse if apps are given access to screen control. But on the other hand, gaming would be more interesting with something more to interact with than just what is displayed on screen.

It will be interesting to see what becomes of this.


I expect a rogue app that, upon detecting that the phone is lying display-down, starts to move the phone by imitating a snake's muscles. And the phone runs for the exit and beyond!


I'm afraid it may take a little longer before my phone can safely climb off a table, much less those 5 flights of stairs before it reaches the street...

Still, the idea of a million smartphones collectively climbing out of their nightstands at full moon and then crawling towards some supervillain's cave, as fast as their little bubble-screens will carry them, is definitely Hollywood material!


I'll bet one of the first apps to take advantage of this will be a back massager app.


That will probably be the second. The first will undoubtedly involve a penis of some sort/shape.


This is cool, but is there really a demand for this? Didn't we almost universally switch from textured, real buttons to glass interfaces over the last 5 years?

Maybe it's just me, but I'm not exactly pining for my old BlackBerry's keyboard. My iPhone's display/interface isn't perfect, but this just isn't a real enough problem for me. Is it for you?


Glass is preferable because it can be reconfigured by software to fit the application, allowing dead-simple, focused interfaces rather than bloated, inflexible catch-all input devices.

The tradeoff was the loss of tactile feedback, meaning good luck dialing a number, choosing a song, etc. while driving. Tactile glass would be the best of both worlds.


And not just driving. Take the iPad, for instance - it's fun and kind of magical, but a lot of us dismiss it because you can't really do "serious" input on it. Even though I really like it, I have to confess that I can't really type that fast, nor is typing comfortable enough to do for very long. But if typing on my iPad were as easy as it is on a chiclet-type keyboard - just a step or two down from my laptop keyboard - then suddenly things would equalize a bit. And what is for many people the major barrier to using a touch device exclusively would disappear.


I don't want to look at the screen all the time, the same way that when I type on a real keyboard I don't usually look at the keys (or even the screen some of the time). On tablets, I'd like the feel of a keyboard some of the time (without losing the real estate when I'm not typing), but I'd also really like tactile feedback for music applications, scrolling, and stuff like that.


What is really exciting about this is that it could potentially be used as an ad hoc surface generator. A basic idea would be to have physical keys on the variety of piano apps available for iOS. An extreme example would be having terrain pop up out of the screen in games like "The Bard's Tale", providing tactile knowledge of the game levels.

There isn't much information on the actual implementation on the website, but if you could make the surface depressable or not depending on the context, then the variety of applications is endless: think of the blind being able to use tablets to read Braille (non-depressable), or the piano idea from above (depressable).



It is funny to see this because a few years ago in college I mused with the idea of using bubbles for buttons on screens and drew up some plans. I didn't have the tech or the time to bring it about, but I am glad someone else has. The issue I came up with was how to build a small pneumatic motor that would fit in a device, not make much noise, work quickly, and not drain all the power fast. I hope they can do it, because tactile feedback would be nice. =)


Am I the only one thinking of the Heart of Gold (The Hitchhiker's Guide to the Galaxy)? P.S.: I couldn't really reach the page; just the description evokes that memory.


This would be handy to adjust the volume or pause the currently playing track on your smartphone. It always kinda annoyed me that I had to take it out of my pocket to do that.

It's a cool technology, but I think it won't be more useful or productive than a regular touch screen. How are you gonna swipe on a bumpy touch screen?


I have been thinking about this for a while - great to see a company is actually taking the initiative to really build this! I think this will definitely help to bridge the gap between buttoned and touch phones - it will tremendously improve keyboard accuracy on a phone, if nothing else.


A lot of the sub pages don't seem to have <title> elements. That seems like an odd oversight.


Microsoft - a company that is making such a big push with touch in Windows 8 - should be the most interested in this company.

Tactile touch has the ability to transform tablets from casual and convenience devices to the next generation of computers and MS is already betting on touch.


Man, I knew this was coming. I just thought it would be like 5 years from now.


The website appears to be down. Who uses Yahoo for hosting?


It kills me how they synced the beats of the intro music to the "morphing" of the buttons...


Why is everybody ranting about this nice technology while ignoring the fact that the site's image and CSS resources aren't loading?!



