GuriVR – Describe your VR experience and the editor will do the rest (gurivr.com)
121 points by bpierre on Nov 23, 2016 | 49 comments



Hi, GuriVR author here. Let me know if you have any questions :)


Just nitpicking: you use the Spanish flag to switch languages, but the grammatical person and verb conjugations you use are generally not used in Spain.


If the Mexican flag had been used instead, other people would have complained about that usage.


I'm Argentinian and it's a challenge, LOL. The next design will just say the word "Spanish".


Just wondering: if I ran two instances of your application, would I have "guríes" or "gurises"? :-)


I'd say "gurises", since "gurí" means "kid" in Uruguay


Lo' chiquiline' ("the li'l kids")


What direction are you planning to take this project?

I see a lot of potential to create and edit virtual worlds on the fly using things like NLP, big data, proc-gen, computer vision, etc. then expanding that into mixed reality.


I'm currently working in the journalism world, so I'm taking a lot of feedback on how to tell compelling stories with the tool.

On the other hand, I'm playing with other kinds of input, such as speech recognition, for in-VR authoring.


Is there a way to see what others have made?


Not yet, but I plan to add it


VR version of WordsEye? http://www.wordseye.com/


Kind of. I saw WordsEye for the first time after I started building Guri (and I love it). I guess my NLP is much worse, but on the other hand Guri has some capabilities built into the editor for searching resources. Also, the AR mode in Guri expands what you can do with it


Clever idea. But scrolling the little VR scene on the right with my mouse is backwards. When I click and drag to the right, the camera pans to the right. But since I'm clicking on the content and dragging, it feels like the content should pan to the right, meaning the camera should pan to the left.


This is impossible to get right. If they had done it the other way around, other people would be complaining.

Source: I've done it both ways for several apps and people always complain, either way.



If people complain about the other way, it's just because they're used to things doing it wrong. When you're dragging something, the thing you're dragging stays under your mouse. Here you're clicking and dragging on the scene, so the scene should stay under your mouse, but it doesn't. It flies in the opposite direction.


It depends on whether you consider it to be sitting in a stationary position and dragging the environment around or sitting in a fixed environment and dragging yourself around. Largely personal preference, but if you're expecting one and get the other it's pretty disorienting.

Having to click and drag in the opposite direction is pretty weird though. First person shooter style "move mouse left to look left / move environment right" is just mouse movements. You're not grabbing something to drag it, so there's no clicks.

Tough to translate that into a windowed thing where you can't do fullscreen mouse ownership like shooters do.


If I get the grabbing cursor when I press the mouse, I expect dragging logic. That actually would be a good way to make it unambiguous, but here it's used wrong.


That's a really good point. Regardless of which style of dragging someone prefers, the grabbing cursor is a clear indication to expect "natural" style dragging.

After all, when you press the button, the fingers close as if they are grabbing something in the scene. In real life, when you grab something and move it, it goes in the direction you move, not the opposite direction.

Someone correct me if I'm wrong, but I believe in large part it's gamers who tend to expect the reverse dragging. I think at this point far more people expect natural dragging as used in most online maps and street views, etc.

360cities used to have the reverse dragging, but I see they have changed it to natural dragging now. Try the panorama on their home page for example:

http://www.360cities.net/

But the other thing I was told about gamers is that they don't expect to drag with the left button, but with the right button.

This suggests a possible solution: a left-button drag can use natural dragging, and a right-button drag can use reverse dragging.

The finishing touch would be to have the cursor change to something other than the closed hand when the right button is down. I'm not sure what a suitable cursor would be, but there must be something - anything other than the closed hand. Maybe a four-way arrow?
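The two-button proposal above could be sketched as a pure mapping from drag delta to camera delta. This is a hypothetical sketch (the function name and `sensitivity` factor are assumptions, not from the site), with left-button "natural" dragging moving the camera opposite to the cursor so the scene follows it:

```javascript
// Map a mouse drag delta to a camera yaw/pitch delta.
// button 0 = left  -> natural drag: scene follows the cursor, so the
//                     camera rotates in the opposite direction.
// button 2 = right -> reverse drag: camera follows the cursor, FPS-style.
function dragToCameraDelta(button, dx, dy, sensitivity = 0.005) {
  const sign = button === 0 ? -1 : 1;
  return { yaw: sign * dx * sensitivity, pitch: sign * dy * sensitivity };
}
```

In a real handler you would read `event.button` on `mousedown`, remember it, and feed each `mousemove` delta through this function while also swapping the cursor per the suggestion above.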


While WebVR scenes can be viewed in a browser, their intended target is the vr headset, where the "dragging" is not at issue, and you are turning the view by turning your head. I think it would make more sense to keep the reverse dragging and change the mouse icon here. The intended experience is not "here is a scene that you can drag around" but "here is a virtual scene you are standing in and you can look left and right". So the intended context is that, headset or not, you are controlling the viewers head.


If you want camera-drag with right-click, ideally you'd just hide the mouse (and capture it so it can't leave the bounds of the view). That's normally what games do when they have right-click dragging in the fashion you describe.

That said, I have no idea if JavaScript can capture the mouse like that.


It can! For a capture that lasts until the mouse button is released:

https://developer.mozilla.org/en-US/docs/Web/API/Element/set...

Or for a capture that works even if the mouse button is not pressed, and remains until the Esc key is pressed:

https://developer.mozilla.org/en-US/docs/Web/API/Pointer_Loc...

The first option would be the one to use for right button dragging.
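A minimal sketch of the per-drag option, assuming the Pointer Events API (`setPointerCapture` / `releasePointerCapture`); `el` is any element-like object implementing those methods, and the function names are my own:

```javascript
// Capture the pointer for the duration of a right-button drag, so the
// element keeps receiving pointermove events even if the cursor leaves
// its bounds. The capture ends automatically on pointerup, but we
// release explicitly to be safe.
function beginDragCapture(el, event) {
  if (event.button === 2) {
    el.setPointerCapture(event.pointerId);
  }
}

function endDragCapture(el, event) {
  if (el.hasPointerCapture(event.pointerId)) {
    el.releasePointerCapture(event.pointerId);
  }
}
```

For the second option (capture without a held button, until Esc), the call would instead be `el.requestPointerLock()` from the Pointer Lock API.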


The first one is Firefox only, the second one is everything-except-Safari.


Isn't this just "nurture"? I am used to dragging the camera, but you are used to dragging the scene?

What's right?


But you are in 3D space, hence your view rotating in the direction that you drag the mouse (think first person shooter, you wouldn't invert controls for that).


In an FPS you don't hold the mouse button down to change your camera. In an FPS the cursor _is_ the camera. But here, you're literally clicking and dragging, and dragging actions always go the other way.


I agree. If you use the mouse to point, then the camera should move in the direction the mouse moves. If you use the mouse to grab the world and move it, then the world should move in the direction the mouse moves.


A-Frame defaults the desktop drag (which acts as a preview for VR experiences) to dragging the camera. For static scenes such as panoramas it can feel weird, but A-Frame does have options now to reverse the drag direction. When you're building 3D scenes and using WASD, it feels more natural to drag the camera.

Pointer lock could be added, but we treat the desktop drag as a non-invasive preview mode; experiences can implement pointer lock if they need it.
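For reference, the reverse option mentioned above is a flag on A-Frame's `look-controls` component. A minimal scene sketch (the `pano.jpg` asset is a placeholder):

```html
<!-- look-controls exposes reverseMouseDrag to flip the desktop drag
     direction to "natural" scene-dragging. -->
<a-scene>
  <a-entity camera look-controls="reverseMouseDrag: true" wasd-controls></a-entity>
  <a-sky src="pano.jpg"></a-sky>
</a-scene>
```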


But since there's a "hand" pointer, it seems more logical that it would drag whatever it is holding, in this case the scene.


While we're discussing the dragging behavior, you also can't click on the bottom ~1/3 of the pane, seemingly because of the little cardboard/vr goggle graphic in the bottom corner.


You can use this link to log in without signing up with an email address

https://gurivr.com/stories/?token=PUraxsVQKdPcmD3wKd9uNC&uid...


Neat. Built on A-Frame, which is built on three.js, which uses WebGL/WebVR.


We tried three.js (and Babylon.js as well) for a WebVR project targeting mobile devices a few months ago. Suffice it to say both are extremely unoptimized and crippled from the ground up, with architectures easily wasting at least half the CPU every single frame.

They can both be made orders of magnitude faster simply by focusing on cache misses, branch predictions and data packing. But I don't see it happening without full rewrites in both cases.

We ended up writing a simple proprietary WebGL/WebVR renderer from the ground up - our scenes dropped from 18-20ms of CPU time to a very stable 0ms (and we made them much bigger as well.) Even the GPU time saw a big performance boost because we were using normalized integers instead of floats to reduce the vertex stream's bandwidth.
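The normalized-integer trick mentioned above can be sketched as follows. This is a hypothetical helper (not from the project described), assuming positions have been pre-scaled into [-1, 1] by a per-mesh factor:

```javascript
// Quantize float vertex data in [-1, 1] to normalized int16, halving
// vertex-stream bandwidth versus float32.
function quantizeToInt16(values) {
  const out = new Int16Array(values.length);
  for (let i = 0; i < values.length; i++) {
    out[i] = Math.round(Math.max(-1, Math.min(1, values[i])) * 32767);
  }
  return out;
}

// On the GL side the attribute is declared as normalized SHORTs, so the
// GPU converts back to [-1, 1] floats for free, e.g.:
//   gl.vertexAttribPointer(loc, 3, gl.SHORT, /* normalized */ true, 6, 0);
```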

They're both good libraries to learn 3D and toy around with, but they're clearly aimed at beginner programmers with no or next to no experience in 3D graphics programming. Such a requirement goes directly against raw performance which requires loads of non-beginner friendly techniques.


Please contact me at ben@exocortex.com so we can chat. I am a large contributor to three.js and use it extensively. I would appreciate understanding what you feel needs a rewrite. You seem like you know what you are talking about.


Just checked out exocortex.com, have you taken a look at the HoloLens? For instance, Vuforia is working on selling catalog items using 3D models built in Unity [0], but perhaps you could use HolographicJS as another target once it matures [1].

I'm working in AR in a different space but as a longtime JS developer I would love to see this all come together.

[0]: https://www.youtube.com/watch?v=U6CpD4hVmqc&feature=youtu.be...

[1]: https://github.com/lwansbrough/HolographicJS


Sure thing! I'll drop you a mail before leaving work today in a few hours.


I don't know. There are lots of really great things being made with Three.js. If you couldn't get it to work, it says more about your lack of experience with Three.js than it does about the unsuitability of Three.js.


I could get both three.js and Babylon.js to work without a single issue; anyone with even the slightest experience with game engines can do so effortlessly in minutes. Yet no amount of experience with either library will make their performance issues go away.

Have you profiled any of them? They take at least an order of magnitude more CPU than required in the best of cases, not even counting wasted GPU bandwidth/cycles. Heck, babylon.js sets its active texture unit by string concatenation - I can't think of a less efficient way to do it.
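For contrast, the standard fix is to cache GL state so redundant `activeTexture` calls (and any string building) are skipped entirely. A minimal sketch (my own helper name; `gl` is assumed to expose `activeTexture`, as a WebGL context does):

```javascript
// Returns an activate(unit) function that only touches the GL state
// when the active texture unit actually changes.
function makeTextureUnitCache(gl, TEXTURE0 = 0x84C0 /* gl.TEXTURE0 */) {
  let current = -1;
  return function activate(unit) {
    if (unit !== current) {
      gl.activeTexture(TEXTURE0 + unit);
      current = unit;
    }
  };
}
```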

Have you seen any other graphics engines to add weight to what you're saying? I spent a full week in both projects' codebases to try and optimize them before giving up; like I said, both architectures cripple performance from the ground up. They're very naive implementations of very old ideas that haven't been used in game engines for at least a decade.

Don't be so quick to imply a lack of experience; it very well might just be the Dunning-Kruger effect at work.


Hey, Kevin from the Mozilla VR team here.

Perhaps targeting mobile is a different story; mobile browsers are constrained. We built A-Frame on top of three.js, and we're able to build compelling 90 FPS room-scale VR experiences running with the Vive in the browser. Perhaps it could be optimized, but it's worked wonderfully for our use cases. https://blog.mozvr.com/a-painter/


That matches our experience as well; desktop computers have no problems running VR in the browser. We couldn't get above 50FPS for the mobile devices we targeted (the servicing contract we had was for a mobile experience only.)

This is when we decided to write a custom engine tailored specifically for our needs based on my experience working with AAA game engines (I even shipped a title on the PSVita so had intimate knowledge of PowerVR and ARM architectures). And as I said in my original post, it is definitely not beginner friendly - while three and babylon both are.

It took about two weeks to write the core tech and it can only do what this specific project required. If not for performance issues on last-gen mobiles we'd never have rolled our own.

Desktop computers are incredibly fast - they can easily waste 90% of their CPU and still yield a lightning-fast VR experience (for small scenes at least - less than a hundred draw calls). Mobile not so much.


I did not know of WebVR. Reminds me a bit of VRML. I really hope to see this gain wider adoption, e.g. a product like Google Street View moving to it.


Super creative use of A-Frame, nice job. I spent the last month or so building a VR project, and I found A-Frame itself is still very early on and has a number of quirks. Any particular issues you ran into?


- Autoplaying video/audio is a PITA. If you use video or audio in Guri you'll see a play button. It seems iOS will eventually remove the restriction...

- CORS is a hard restriction for this tool. That's why I built the uploader into the editor. If you build your own experience and can keep your assets on the same domain or on a CORS-enabled CDN, please do

This being said, the A-frame community is the most welcoming I've ever seen
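The CORS point above boils down to checking whether an asset shares the page's origin. A hypothetical helper (not part of Guri) using the WHATWG URL parser:

```javascript
// True if the asset lives on a different origin than the page, in which
// case the server must send CORS headers for WebGL textures/media.
function needsCORS(assetUrl, pageOrigin) {
  return new URL(assetUrl, pageOrigin).origin !== new URL(pageOrigin).origin;
}

// In the browser you'd then request the asset accordingly, e.g.:
//   img.crossOrigin = needsCORS(src, location.origin) ? 'anonymous' : null;
```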


If it doesn't work, try disabling Privacy Badger or similar addon. Solved it for me.


No Pokemon option?



Not yet :'( but you can import some Pokémon from clara.io :P


Excellent!



