Interactive and augmented environments are far more exciting to me than the promises of VR. Why escape my life when I can make my lived world better?
The Dynamicland stuff is particularly fun due to the promise of accessibility. A mature version of this vision would be tremendously exciting for me as a high school teacher.
When I read about VR in education I mostly see "experiences" as a defining value of the platform. This is fine, but less interesting than the potential for augmented environments to enhance my nuts-and-bolts professional practice. I want to be able to create personalized tools for interacting with media and ideas (extant and live creations) and managing interpersonal communications/interactions in my room. I can see immediate value in the ability to build branching lesson plans with trigger-able subroutines to provide more individually appropriate experiences to students. I feel strongly about non-linear learning opportunities, but they're logistically difficult. Likewise, to create modifiable one-to-one/one-to-some/one-to-many communications channels to smooth the experience of working with 30-35 people when I can only be in one place at a time (and to mollify the anxieties/anger/angst of being a teenager expected to publicly perform in a crowd).
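To make the branching-lesson-plan idea concrete, here's the rough shape of what I mean, sketched in plain Python (nothing here is a real product or classroom system; every name is made up):

    # Hypothetical sketch of a branching lesson plan -- just the shape of
    # the thing I'd like to be able to author, not any real system.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Step:
        """One activity in a lesson, with optional branches keyed by a trigger."""
        name: str
        activity: str                                  # what the student actually does
        branches: dict = field(default_factory=dict)   # trigger label -> next step name

    @dataclass
    class LessonPlan:
        steps: dict        # step name -> Step
        start: str

        def next_step(self, current: str, trigger: Optional[str]) -> str:
            """Follow a branch if the trigger matches, otherwise stay on the current step."""
            return self.steps[current].branches.get(trigger, current)

    # A tiny two-branch plan: students who struggle with the warm-up get a
    # scaffolded review; students who finish early get an extension problem.
    plan = LessonPlan(
        steps={
            "warmup": Step("warmup", "entry ticket on yesterday's topic",
                           branches={"struggled": "review", "finished_early": "extension"}),
            "review": Step("review", "guided review with worked examples"),
            "extension": Step("extension", "open-ended extension problem"),
        },
        start="warmup",
    )

    print(plan.next_step("warmup", "struggled"))       # -> review
    print(plan.next_step("warmup", "finished_early"))  # -> extension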
There are any number of small tools I would like to build. I know what problems I want to fix to improve my efficacy, and I don't need polished consumer products to do that.
If "Hypercard for the World" allowed me to start hacking together useful tools for myself, I would shower them with love.
"Why escape my life when I can make my lived world better?"
Some people could really enjoy escaping their lived world. Imagine if the cost of VR came down enough that a person living in a slum in Calcutta or Rio de Janeiro or Chicago could walk through the Taj Mahal, or Petra, or sit in a Harvard University lecture hall? What if they could just make the smell go away for an hour?
It would greatly improve the quality of their life even without solving the much tougher challenge of getting them out of poverty.
Considered expansively enough, virtual reality for all five senses could be a substitute for almost any consumption experience. People could eat protein paste and have it taste like filet mignon. They could walk between the same two rooms in their house but feel like they were in Versailles. VR could massively reduce the global demand for energy and materials.
A person's interactions with the world are not "consumption experiences." This sort of VR future would turn humanity into ghosts -- haunting a virtual world, unable to truly inhabit it or exercise any degree of agency over it.
I'm eagerly awaiting social VR. Playing multiplayer games already gives people unparalleled social interaction and insight into other people, so being able to freely explore virtual worlds together could greatly strengthen social cohesion, even if, at the very least, all it did was teach various social skills.
Yeah, there's no reason humans can't interact with the physical world through auxiliary VR worlds. We already do this with websites; VR will eventually be an interface for acting in the world through connected robotic devices.
At the same time, the person in VR would be giving learnable information to the robot while controlling it (like a driver generating training data for a self-driving car). Eventually, you could wean the robot off of VR and it would be able to carry out the task on its own. So VR could also be a way to teach robots.
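In machine-learning terms this is basically imitation learning: log (observation, action) pairs while the human drives the robot through VR, then fit a policy that reproduces the behavior. A minimal sketch of the idea, where VRTeleop and Robot are hypothetical placeholders rather than real libraries:

    # Minimal imitation-learning sketch: record what the human operator does
    # through the VR interface, then train a policy to reproduce it.
    # `vr_teleop` and `robot` stand in for hypothetical hardware interfaces.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def collect_demonstrations(vr_teleop, robot, n_steps):
        """Log (observation, action) pairs while the human controls the robot."""
        observations, actions = [], []
        for _ in range(n_steps):
            obs = robot.get_observation()           # e.g. camera / joint state
            action = vr_teleop.read_human_action()  # what the operator did
            robot.apply(action)
            observations.append(obs)
            actions.append(action)
        return np.array(observations), np.array(actions)

    def train_policy(observations, actions):
        """Behavior cloning: supervised regression from observation to action."""
        policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
        policy.fit(observations, actions)
        return policy

    # Once trained, the robot can be "weaned off" the human operator:
    #   action = policy.predict(robot.get_observation().reshape(1, -1))[0]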
I think I've seen this in a movie before, perhaps we could even harvest their body heat for energy and hope they don't come up with a prophecy for a "one" to set them free.
> When I read about VR in education I mostly see "experiences" as a defining value of the platform. This is fine, but less interesting than the potential for augmented environments to enhance my nuts-and-bolts professional practice
It seems like this goes beyond VR, and is reflective of the design mindset in tech at large. Creating canned, linear, self-contained, one-size-fits-all flows instead of giving users tools that they can combine/contrast/etc into more powerful abstractions that fit their individual use cases.
Which is disappointing. I either swim in the canned flows or learn to really wrench on code. How many nights and weekends is it worth to be able to create my own tools? Is that even a realistic goal?
What is the author referring to by "Hypercard in the World"? Does he mean an IoT version of Apple's old HyperCard system? I didn't quite understand the tech stack behind Laser Socks.
> The theme was to make hybrid physical/digital games using a prototype research system Bret Victor and Robert Ochshorn had made called Hypercard in the World. This system was like an operating system for an entire room — it connected cameras, projectors, computers, databases, and laser pointers throughout the lab to let people write programs that would magically add projected graphics and interactivity to physical objects.
In the same way Smalltalk was about virtual objects computing and communicating with each other through a common environment, Hypercard in the World was about physical objects computing and communicating with each other. The next iteration of the system is even called Realtalk as a play on "Smalltalk" but for the physical world.
There were lots of other demos made in this system, but those will be published at some later point...maybe.
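To give a very rough flavor of the idea (this is an invented analogy in ordinary Python, not Realtalk or the actual Hypercard in the World code), think of rules attached to physical objects: a camera recognizes an object, its rules fire, and a projector draws graphics at the object's location.

    # Invented analogy only -- NOT Realtalk or Hypercard in the World.
    # The idea: rules are attached to recognized physical objects, and the
    # "room OS" runs them, projecting output back onto the table.
    # `camera` and `projector` are hypothetical placeholder interfaces.

    rules = {}

    def when(object_name):
        """Register a rule that runs whenever the camera sees `object_name`."""
        def register(fn):
            rules.setdefault(object_name, []).append(fn)
            return fn
        return register

    @when("health_meter_card")
    def draw_health(obj, projector, game_state):
        # Project a fill bar onto the posterboard, proportional to hits taken.
        projector.fill_rect(obj.bounds,
                            fraction=game_state["hits"] / 10,
                            color="blue")

    def room_loop(camera, projector, game_state):
        """One frame of the room 'OS': see objects, run their rules."""
        for obj in camera.detect_objects():
            for rule in rules.get(obj.name, []):
                rule(obj, projector, game_state)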
> I guess I have a futuro-nostalgic attachment to looking people in the eye when we work together.
This is tangential, but as a blind person [1], I wonder if you would be uncomfortable working together in person with someone who can't make eye contact and doesn't even know that you're looking at them. I don't mean to be confrontational; I'm just curious. Eye contact has always been an abstract concept for me, and somehow, I never thought about how it might actually be important in a working relationship until I read your comment. Have you, by chance, ever worked with a blind person?
[1]: Actually, low vision (I can read text up close, and my limited sight helps some with orientation and mobility), but I can't make eye contact, and don't really understand the significance of it.
I have worked with blind coworkers, and they're still making "eye contact", if I recall correctly (I work remote and it's been a while since I last met them). They tend to turn their head a certain way which shows you when they're listening to you.
Eye contact is really just a signal that the person's attention is on you.
Thanks for asking. No - I am not uncomfortable talking or working with blind people.
My point about looking someone in the eye was a synecdoche for, more generally, the value I feel in-person interaction has, which could be lost if my students and I were using VR goggles.
My ability to "read" students is a big part of my professional practice.
I think there's a broader picture here than our personal preferences about our ideal situations.
Consider software development teams. It's probably optimal for the team to be all co-located so they can communicate in person. But the internet today enables dispersed teams (that otherwise wouldn't be able to work together, e.g. for financial reasons) to pretty effectively collaborate on software.
The internet provides additional options, and those options provide additional benefit. VR could provide collaborative benefits along the same lines.
I'm not sure about suggestions, but I was thinking about Juanita when you talked about wanting to look people in the eye. In Snow Crash she was obsessed with facial expressions and the need for just what you're describing for real human experiences. Having said that, though, I also recommend anything by William Gibson in terms of VR/AR.
VR for connecting real-world spaces across distances, so convincingly that our brains cannot distinguish the "V" part of it, is to me where the most value lies.
Here's where the 'pointing toward the future of computing' bit comes in:
> Similarly, Laser Socks is a fun demo by itself, but what we as researchers hope it points toward is a new type of computing that instead of isolating humans in artificial digital worlds, provides a medium of expression that is continuous and integrated with our physical and social worlds
So this is one 'application' built on an 'OS' developed by Bret Victor and Robert Ochshorn which was called Hypercard in the World, though the latest version is called Realtalk. More info here: https://limn.it/utopian-hacks/
Wow, that link. What an insulting, self-important piece of drivel. It casually asserts that everyone who isn't Alan Kay or Bret Victor is a dead-eyed, soulless drone. That all the work done in tech since Xerox PARC has been a series of criminal tragedies. Hope this guy scoots off elsewhere soon, to write some more fantasy-time "ethnographies".
An amazing example of the possibilities is the UC Davis Augmented Reality Sandbox, which uses a projector, an Xbox Kinect, and a sandbox that lets you wiggle your fingers to create rain, dig rivers and lakes, and see how water flows through an environment. My kids and I were blown away by it at a science festival last year. It's all open source, and I think you could build your own for less than $500.
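For anyone curious how something like that hangs together, the core loop is small: read a depth frame from the Kinect, treat it as a terrain height map, run a water simulation over it, and project the colored result back onto the sand. Here's a hand-wavy sketch, where read_depth_frame and project are placeholders rather than a real Kinect/projector API (the actual UC Davis project is open-source C++ and far more sophisticated):

    # Rough sketch of the AR-sandbox loop: depth image -> height map ->
    # simple water flow -> projected color image. `read_depth_frame` and
    # `project` are placeholders for whatever camera/projector library you use.
    import numpy as np

    def step_water(height, water):
        """Move a little water from each cell toward lower neighboring cells."""
        for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            surface = height + water
            neighbor = np.roll(surface, shift, axis=(0, 1))
            # Outflow is a fraction of the surface difference, capped by the
            # water actually present in the cell.
            flow = np.clip((surface - neighbor) * 0.1, 0, water)
            water = water - flow + np.roll(flow, (-shift[0], -shift[1]), axis=(0, 1))
        return water

    def frame(read_depth_frame, project, water):
        depth = read_depth_frame()        # placeholder depth-camera call
        height = depth.max() - depth      # nearer sand = higher terrain
        water = step_water(height, water)
        project(np.where(water > 0.01, "blue", "sand"))  # crude render placeholder
        return water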
I can see this being interesting in traffic applications. Instead of the red/yellow/green traffic lights we have at intersections today, imagine a "laser counter" system that would "tag" various vehicles as they approached and send a special signal to the car whose turn it was to go. The car, being self-driving, would be integrated into all of this and proceed, assuming the other cars follow the same contract. All it takes is one car not understanding the contract for this to be pretty scary, though. Maybe the road could be tolled: the spike-track or barrier won't go down unless your car has proven that it can cooperate in the intersection protocol.
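What you're describing is close to the reservation-based intersection idea from the autonomous-driving literature: each car requests a slot from an intersection manager and only proceeds once its reservation is confirmed. A toy sketch of that contract, with all names invented:

    # Toy sketch of a reservation-based intersection "contract".
    # All names are invented; real proposals (e.g. autonomous intersection
    # management) are far more involved.
    import itertools

    class IntersectionManager:
        def __init__(self):
            self._ticket = itertools.count(1)
            self._queue = []              # (turn number, car id) in arrival order

        def request_slot(self, car_id):
            """Car announces itself on approach and gets a turn number back."""
            turn = next(self._ticket)
            self._queue.append((turn, car_id))
            return turn

        def may_proceed(self, car_id):
            """Only the car at the head of the queue may enter."""
            return bool(self._queue) and self._queue[0][1] == car_id

        def release(self, car_id):
            """Car reports that it has cleared the intersection."""
            if self.may_proceed(car_id):
                self._queue.pop(0)

    # A car that doesn't speak the protocol simply never gets (or honors) a
    # slot -- exactly the scary failure mode above, and why a physical gate
    # tied to may_proceed() is one way to enforce the contract.
    manager = IntersectionManager()
    manager.request_slot("car_a"); manager.request_slot("car_b")
    assert manager.may_proceed("car_a") and not manager.may_proceed("car_b")
    manager.release("car_a")
    assert manager.may_proceed("car_b")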
> Players try to point a laser pointer at their opponent's socks while dodging their opponent's laser. Whenever they score a hit, the health meter closest to their opponent's play area fills up with blue light. Whoever gets their opponent's meter to fill up first wins.
The light on the ground is a piece of posterboard that has graphics projection-mapped onto it to signify a person's "health" in the game. It would be cool if there were a physically-actuated health bar like you're describing, though!
The module that detected laser dots was like a page of code. It really wasn't complicated at all.
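For the curious, detecting a laser dot in a camera frame really can fit on a page; the gist is thresholding for tiny, very bright blobs. Here's a rough OpenCV version of the general technique, not the actual code from the demo:

    # Rough OpenCV sketch of laser-dot detection -- not the actual Laser Socks
    # code, just the general technique: threshold for tiny, very bright spots.
    import cv2

    def find_laser_dots(frame_bgr, min_brightness=240, max_area=50):
        """Return (x, y) centers of small, extremely bright spots in the frame."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, min_brightness, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        dots = []
        for c in contours:
            if cv2.contourArea(c) <= max_area:   # laser dots are tiny
                m = cv2.moments(c)
                if m["m00"] > 0:
                    dots.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return dots

    # Scoring a hit is then just: is any detected dot inside the region the
    # camera sees as the opponent's socks / play area?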
I like the idea, though. I mean, a smart-home sort of deal, making boring everyday objects cooler, assuming you also have augmented reality.
I thought it was like a backlight panel from a computer display. Pretty cool that it's a projection, if I read what you said right.
Yeah, I wasn't 100% serious about what I said. I don't know if this is a good analogy, but it's like saying of a car, "yeah, it's easy: with the push of my foot, I can turn oil into 60 mph."
This is hilarious. Congrats! It's a pretty big leap to discuss the future of computing but I welcome a world of social interaction, laughing and playing games in our socks. I wanna play this.
I've seen many installations and games that have physical interactivity.
What is special about this game? Is it the system that they are using? Any link with more info?
Laser Socks wasn't a special-purpose installation built for demo purposes. It was a game that normal people could build in their living room in under an hour given the right "operating system". It's one instance of a much bigger idea — that one day everyone could be authoring dynamic/physical content in this new medium just like the written word is used today.
Off-topic: there was a text-based game featured on here some time ago. It was black and white, and the goal was to farm more and more energy until eventually the screen would black out and the system would reset. Does anyone remember the name of that game?
It sounds similar to Spaceplan: http://www.crazygames.com/game/spaceplan - Not quite black and white, not quite text based, but similar. I loved it, and was surprised to find that there's an extended edition available on steam: store.steampowered.com/app/616110/SPACEPLAN/