
How about a battery electric cargo bicycle? Like a Tern GSD ($$$) or RadWagon ($)?

The Tern GSD can carry 180 kg of cargo and has detachable batteries, so that at least gives you the option of bringing extra batteries for long range. Or haul people or wheat.


I'm a huge fan of the concept of super-lightweight EVs. I still think something like the ELF [https://organictransit.com/] is one of the best ideas in this space, and I hope someone takes over the brand or concept and figures out how to market it better.

This needs robust offerings for both lower-end and higher-end models, but it's hard to say which to focus on first these days. My instinct is to go for the low end and maximize volume. There has to be a sweet spot between usefulness and affordability. As has been pointed out elsewhere in this thread, designing for modular upgrades would probably help.


I already like the sound of the GSD — Get Stuff Done, apparently.


because it's just grabbing the first three paragraphs that anyone can see before the paywall...


I failed (ran out of time) on one of the problems in Challenge 1 during manual play. The physics seem a little wonky to me: it's easy to miss when running at a ball, the floor is slippery, and it takes a long time to reorient and build up speed.

Guess I am not an animal.


They may be lower cost than the status quo, but $3199 to $3699 is out of my reach as a (non professional) hobbyist. I hope maybe one day though!

https://sites.google.com/view/roboicsbenchmarks/getting-star...

https://www.trossenrobotics.com/d-kitty.aspx

https://www.trossenrobotics.com/d-claw.aspx

Not quite in the same realm as the $50 to $90 'Google Voice / Google Vision AI' kits:

https://aiyprojects.withgoogle.com/


You’re not the intended audience. This is meant for small research labs that are just starting up and want to enter the RL/Robotics research space and can’t afford a $400,000 PR2 or $150,000 shadow hand.


Totally worth the money :)

https://youtu.be/c3Cq0sy4TBs


"The Proud Robot", Henry Kuttner, Astounding 1943

https://www.prosperosisle.org/spip.php?article863


You can buy a Meca500 for 15k USD. If you are just doing learning stuff, it should be more than adequate.


How do you know these prices? I generally find these companies do not publish them on their websites.


Hardly secret around institutions that have them.


The main cost is the actuators. The servos they use are the only ones out there with integrated feedback from the sensors, which is required so they can calculate hardware safety. They also have torque control, albeit pretty bad, which is necessary for many approaches to controlling legged robots and manipulators.


According to a Chinese SCARA motor producer, Tamagawa is a notable Japanese manufacturer of servos used in SCARA robots. http://www.tamagawa-seiki.com/products/servomotor/


Wouldn't a stepper motor with an encoder and current sensor be cheaper?


I'm skeptical that a stepper motor would be able to support its own weight and more without a gearbox. This is especially important for making serial manipulators. Also keep in mind that what they're aiming for here is repeatability, so that researchers can easily compare their algorithms. That means using parts that don't vary too much and minimizing the amount of assembly researchers have to do. There are much cheaper knock-off versions of these actuators, although they may not be as repeatable. At the very least they aren't as well documented.


Steppers are highly repeatable.


This is more about repeatable experiments than particular movements.

Regarding the price, the actuators seem to be ~240 USD. A good stepper motor with the appropriate feedback mechanism to make it suitable for servo-like control, plus a modern stepper motor controller suited to the robotics context, will likely not be (much) cheaper, and you have to hack together the servo functionality, tune settings, etc., which seems detrimental if the goal is repeatability across teams. I'm not in the target audience for these robots either, but from the perspective of robustness and repeatable research they don't look too shabby.
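
To make "hack together the servo functionality" concrete, here's a rough sketch of the kind of closed-loop position control you'd end up writing yourself (the driver and encoder interfaces are hypothetical placeholders, not any particular library):

    # Toy sketch: servo-like closed-loop position control hacked onto a
    # stepper + encoder. The `driver` and `encoder` objects are hypothetical
    # placeholders (driver.step(direction), encoder.read() -> counts).
    import time

    class ClosedLoopStepper:
        def __init__(self, driver, encoder, steps_per_count=200 / 4096, kp=2.0):
            self.driver = driver
            self.encoder = encoder
            self.steps_per_count = steps_per_count
            self.kp = kp  # proportional gain, one of the things each team has to tune

        def move_to(self, target_counts, tolerance=2, dt=0.001):
            while True:
                error = target_counts - self.encoder.read()
                if abs(error) <= tolerance:
                    return
                # Proportional control: command a number of steps toward the target.
                steps = max(1, int(abs(error) * self.steps_per_count * self.kp))
                direction = 1 if error > 0 else -1
                for _ in range(steps):
                    self.driver.step(direction)
                time.sleep(dt)

Even this toy version has gains, tolerances, and timing parameters that every team would tune differently, which is exactly the repeatability problem the integrated actuators are meant to avoid.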


That wouldn't work for a robotic hand.


I wonder how many people are upset that the vehicles had to drop them off in a valid passenger load/unload zone, as opposed to the usual Lyft/Uber tactic of parking in a no-parking/no-stopping zone, bike lane, crosswalk, etc., because it's most convenient for the drivers and passengers (at the expense of the safety of everyone else sharing the space)...

That quote at the end:

> I guess Lyft has me spoiled. I like getting dropped off in front of the place im going too [sic] not just in the parking lot....


Cyclist here. In my experience, the majority of the time a driver stops or parks in the bike lane in an urban area in the US (e.g., I live in Austin), there's a legal parking/stopping spot within a reasonable walking distance, often within 50 to 100 feet. (If this isn't true where you live, consider that locations differ. There are probably exceptions too. I'm told that legal parking typically isn't close in SF.)

Then again, my idea of "reasonable walking distance" seems longer than most people's. Having spoken to many drivers who have parked in the bike lane, I'm amazed by how negatively some have reacted to me recommending that they park as little as 50 feet away. In some cases the non-bike-lane spot is closer but the convenience of pulling to the side of the road rather than doing a more complicated maneuver seems irresistible.

If Waymo follows the law, good for them. Makes me more likely to be a customer of theirs in the future.


In the right light, this is a competitive advantage for Waymo. Prove that it's possible to have a ride hailing app that strictly follows municipal stopping/parking rules, and then encourage cities to start strictly enforcing those rules and ticketing offenders. Self-driving cars would presumably be better than humans at following those rules (at least, if we're imagining a world where self-driving cars work safely and consistently).


A somewhat ironic form of regulatory capture, in that it should have already been captive...


I often see Waymo vans near San Antonio and El Camino in Mountain View. It's kind of a nightmare drive for all parties. Curbside parking is allowed, there are no demarcated bike lanes, and much of the road is in suboptimal condition. There is often construction going on along sidewalks and buildings, and Uber dropoffs are common. You occasionally see cyclists, though I suspect most stick to a side street.

What I suspect people are complaining about is that Waymo doesn't do curbside dropoffs at locations with a parking lot -- not common biking routes. I bet Waymo doesn't have the data to know whether a curb is painted yellow, blue, or red, and just avoids them, while a Lyft driver would probably put on hazards and drop people off at yellow curbs and bus stops.


Mountain View isn’t even close to a challenging environment. I would like to see Waymo try SF on the same routes as Cruise.


I have raised this point many times before.

Uber/Lyft drivers break the law dozens of times a day. In fact the entire experience is predicated on their ability to pick you up/drop you off in places they shouldn't e.g. out the front of your house.

I guess self driving cars will be closer to Uber Pool in terms of experience.


It seems like there's an overall issue where people feel they can't do anything if the car takes a weird route or drops them off in the wrong place.

Another comment mentioned in the story said the car skipped the drop-off location and inched past a bus stop, and other people mentioned inefficient routing.

I don't know how the system works, so this might be user error of some kind, but plenty of people are not going to want to get in a taxi if they feel like they have no control over where it's going or where they can get out.


This is one of the issues with machines vs. people. For an able-bodied person, like I am most of the time, sure I'm fine with being dropped off half a block away. The driver may ask me and I'll be "fine." If someone is using a walker--not so much.

I have to believe the last block or two problem will be a big issue with self-driving whenever it eventually arrives.


It’s mainly a different lobbying tactic:

- Uber and Lyft are more toe-stepping and will encourage their contractors to color outside the lines, then deal with the consequences once the administration has caught up with them and is presented with the fait accompli that this is voters' expectations now.

- Google/Waymo has better relationships with local authorities and can obtain permits to drop people off after they've proven they are playing within the lines, and can wait for, and eventually finance, urban furniture changes.

Both use people’s expectations, but differently.


The title (as in, the text between <title></title>) on the page served to me was:

"Las Vegas to Elon Musk: Tesla Tunnel? We'll Take 2 - CityLab"


While you're waiting for the main event to start, here are some recent interviews with Elon about self-driving cars. He's very confident.

"To me right now, this seems 'game, set, and match,'" Musk said. "I could be wrong, but it appears to be the case that Tesla is vastly ahead of everyone."

I am eager to see what they unveil today.

https://www.youtube.com/watch?v=dEv99vxKjVI

https://ark-invest.com/research/podcast/elon-musk-podcast


Hmmm, so then why is Tesla ranked last for autonomous driving by third-party researchers?

https://www.google.com/url?sa=i&source=web&cd=&ved=2ahUKEwiR...

And Elon has a long history of making false claims about Tesla’s progress. For example in 2015 and 2016 he claimed that Teslas would be fully self-driving by 2018.

So why shouldn’t we be skeptical?

https://arstechnica.com/cars/2019/03/teslas-self-driving-str...


But Navigant's criteria were very unscientific. There's no actual quantitative reason Tesla is worse; it was based mainly on business factors like go-to-market strategy and vision.


As opposed to the "scientific", "quantitative" reasoning behind Tesla being the leaders in FSD?


Yes, absolutely. Saying that Tesla, which gets camera data from its half million cars, doesn't have an advantage is crazy. That's not even including the fact that it's the only company that can pursue this strategy: Google would need to start constant data collection, and GM and the legacy automakers would need sensor suites on all their cars yesterday.

No one knows if Tesla's strategy will work, because the other companies don't have the data collection in place.


Neither does Tesla, which makes it a moot point.

They have no way to store or transmit the massive data you are describing off the platform, do they?

My understanding is that they have very limited onboard storage and transmit capability.


Based on their talk today, and Andrej's previous talk where he explicitly shows tools that do just that, constantly downloading data is exactly what they do. https://vimeo.com/274274744

I mean, saying a phone can upload videos to YouTube but a car can't upload to Tesla is a weird ledge to stand on. Even their windshield wipers work by sending video data to Tesla to be learned on.


The first article you link to sources another article as its source, which itself calls bullshit on the ranking.

Your link:

> According to Electrek, Tesla trails behind other companies in terms of autonomous driving tech based on a list created by Navigant Research, an independent research firm.

Electrek’s article:

> Electrek’s Take

> I think Navigant’s autonomous leaderboard is ridiculous. There are way too many brands that keep most of their development under wraps, which makes it hard to evaluate them and therefore, it gives very little value to a leaderboard like this in my opinion.


What is your point?

Electrek is not exactly unbiased. It's literally an EV and Tesla news site. A fanboi site's opinion should probably be taken with a grain of salt.

Here's the Navigant executive summary directly: https://www.navigantresearch.com/reports/navigant-research-l...


What other major manufacturer has anything close to Tesla's autopilot in a car I can buy today? As far as I know, no one.


Only really GM, with Super Cruise on one of their cars, the CT6. And it is not as advanced as Autopilot.


Some big time source laundering going on in here, https://electrek.co/2019/04/19/tesla-falls-autonomous-drivin...

Your "third party research" is obviously bullshit, they go as far as including Apple in their ranking.

This right here is just typical worthless marketing press release spam from a management consultancy firm.


My guess is he means "on the highway". The scary bits of self-driving are person detection, crossing detection, roadwork detection, and cyclist detection (e.g. a cyclist coming up on the right when you are trying to make a right turn).

The Waymo end-game that I heard was "able to go through a drive-thru". I highly doubt Tesla is anywhere near that point.


There have been news reports about the Model 3 Autopilot getting its speed limits from maps, lacking any sort of sign recognition or manual override to adjust to local conditions. The maps seem to be outdated for Germany (1). That's an essential feature even on the autobahn. Given that test result I'd even be skeptical about any claims of being ahead of the game on the highway.

(1) https://m.heise.de/autos/artikel/Test-Tesla-Model-3-4400919....


This is very strange though, is there any confirmation of this?

Basically, most other manufacturers, like Opel, Audi, Mercedes, Hyundai, VW, Volvo, Ford, etc., have had for several years the ability to detect speed limits using computer vision to recognize the road signs. And it works reliably, as is pointed out in your link.

How can Tesla be a leader in using computer vision for cars, but not be able to read the road signs?


> through a drive-thru

The kind of drive-thru that Tesla is currently associated with involves semis rather than fast food and it would be really nice to hear that they've at least licked that particular bug (and for good, this time).


What does "for good, this time" even mean, given their regression issue?


That was exactly the point: the fact that such a thing could happen, be fixed, and then happen again in something mission-critical is very scary.


> The scary bits of self-driving is person detection, crossing detection, roadwork detection...

Your point is very astute.

Among a few other ML/AI MOOCs, I completed Udacity's "Self-Driving Car Engineer" nanodegree - so when I'm out driving, I often come upon situations where I wonder "how would a self-driving car navigate this?"

Today, driving in to work (note: USA), I noticed one intersection I've been through many times before, and that question came to mind. The intersection is interesting, because on approaching it, the road curves to the right, and you can actually see one of the traffic lights on the left before you even see the intersection. By the time you see the intersection, you're already on top of it.

So as you round the curve, you see the lone traffic signal (red/yellow/green); if it is red, do you start to brake, or do you wait until you can "see" more traffic signals? If you wait - will you have time to slow down and/or stop? ...and so forth.

This and others like it are the kind of "edge cases" that will need to be trained on, and/or perhaps other cues will need to be installed or set up so self-driving vehicles can navigate such areas successfully. I know when I first went through the intersection it was a bit of a surprise; it's not a very safe intersection (going home in the opposite direction is no better: in that direction you're headed downhill, have to cross the intersection, and immediately start turning to the left after going through; the curve is really abrupt, and you have protected/unprotected left-hand turns in both directions, etc).
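
To put rough numbers on that brake-or-wait decision (purely illustrative figures, not from any real planner): at 20 m/s (~45 mph) with a comfortable deceleration of about 3 m/s², you need v²/(2a) ≈ 67 m of road to stop, so if the lone red light only becomes visible ~60 m out, waiting to "see" more signals isn't really an option.

    # Back-of-the-envelope check: can the car still stop comfortably before
    # the stop line? All numbers are illustrative assumptions.
    def must_brake_now(speed_mps, distance_to_stop_line_m,
                       comfy_decel_mps2=3.0, margin_m=5.0):
        stopping_distance = speed_mps ** 2 / (2 * comfy_decel_mps2)
        return stopping_distance + margin_m >= distance_to_stop_line_m

    # 20 m/s, light first visible ~60 m out:
    # 20**2 / (2 * 3) ~= 67 m needed, so the answer is "brake immediately".
    print(must_brake_now(20.0, 60.0))  # True

In other words, the planner has to commit to braking the moment the lone light is visible, which is pretty much the awkward behavior the intersection forces on human drivers too.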


Well they’re vastly ahead in one area: data collection. No other company is even close. You could argue about the quality of data but the platform is there and ever growing, and they can upgrade their hardware in the future and augment existing data.


Don't kid yourself, the car has no bandwidth, storage, or performance to send back anything other than a few raw frames from disengage events or other rare triggers.


Why would you say that? The car has LTE and connects to WiFi. It could easily send way more data than any car company at any time, including over WiFi.


And we don't pay for the LTE bandwidth. Tesla covers the cost of uploading the data.


>It could easily send way more data than any care company at any time including over WIFI.

Except that it isn't, and even Karpathy said the quantity doesn't matter, it's the data quality.


What? They are sending way more data, because as far as we know GM and Ford are sending back zero data, and Waymo doesn't have half a million cars' worth of data internally to pick from.


>from our knowledge GM and Ford are sending back 0 data

Yes, two of the autonomous vehicle leaders are not using any data whatsoever.

I thought this was the smartest forum on the internet?


It depends entirely on how they design the system. They don't necessarily need to send all the data from the cars back home when they can send test cases to cars, run the tests in a shadow mode to collect real world results, then send the test results back home.
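
A toy sketch of what that could look like on the car side (names and structure are hypothetical, not a description of Tesla's actual system): tests and triggers get pushed to the fleet, run against the live camera stream, and only short clips or compact results come back.

    # Hypothetical sketch of fleet-side "shadow mode" testing: the car keeps a
    # small rolling buffer, evaluates pushed triggers against each frame, and
    # queues only short clips for later upload (e.g. when on wifi).
    from dataclasses import dataclass, field

    @dataclass
    class ShadowTest:
        name: str
        trigger: object            # callable: frame -> bool, e.g. "driver disengaged"
        clip_seconds: float = 5.0

    @dataclass
    class TestResult:
        name: str
        fired_at: float
        clip: list = field(default_factory=list)

    def run_shadow_tests(frame_stream, tests, fps=36, window_s=10):
        buffer, results = [], []
        for t, frame in frame_stream:
            buffer.append(frame)
            buffer = buffer[-fps * window_s:]        # keep only a rolling window
            for test in tests:
                if test.trigger(frame):
                    n = int(fps * test.clip_seconds)
                    results.append(TestResult(test.name, t, list(buffer[-n:])))
        return results                               # queued for upload later

The interesting design question is what the triggers are: disengagements only, or richer test cases pushed from home, which is exactly what the rest of this thread is arguing about.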


The presentation makes it clear your claim is entirely false, you should watch it.


Which presentation did you watch? Karpathy said specifically "it's not a massive amount of data, it's just very well picked data" when talking about how the cars only send data when one of the configured triggers fires.


There’s a large gap between ‘a few frames’ and a massive amount of data, and the amount sent lies somewhere in the middle. Clearly they can’t send all data (nor would they want to) but it seems it is sufficient for significant learning to take place and the examples shown were good quality over at least a few seconds, so hundreds of frames for each example.


No, it's spot on. It's entirely what I said: the car can only deliver a few raw frames, and only in response to particular triggers.

Notice the cherry-picked examples in the presentation. There is a whole class of problems the field cars can never help with, since they lack the dead-reckoning sensor setup and precise odometry a development car would have.


They showed video in the presentation which was clearly not ‘a few frames’, unless by a few frames you mean seconds of video.


> There is a whole class of problems the field cars can never help with, since they lack the dead-reckoning sensor setup and precise odometry a development car would have.

Can you give an example? I'm curious what kind of triggers strictly require lab-calibrated hardware.


Short video clips from all cameras are sent back to Tesla when associated with a disengagement event, queued for upload when the vehicle is on wifi.


I hope to god they are sending back short video clips randomly sampling all driving conditions, not just the disengagement events.


They are.


I wonder if Tesla is getting subpoenaed for video clips.

Other than for accidents, the SEC investigation, etc.


My FOIA requests say no, but lots of blind spots. I'm not operating "at scale" due to the cost involved with non-electronic FOIA requests.


Thanks.

I can imagine that police could mine this just like they're doing with Google geolocation data.


I hope Tesla has strong governance controls over customer data, and a fierce inside counsel for pushing back against unnecessary or overly broad LEO requests.


How can you claim that? More than Waymo? That would be extremely doubtful. Google has been driving around cars with sensors and cameras for over a decade.


They have a fleet of hundreds of thousands of cars driving real-world miles all over the developed world.


It's still impossible for an outsider to tell - Waymo logs every single vehicle-mile in their entire fleet, but Tesla samples from a larger pool.


But they don't have the cars. Every Tesla on the road sends back data.


Maybe too confident, if you ask me...


I agree. The interview with MIT researcher Lex Fridman was difficult to watch because it didn't seem like they were on the same page at all - Lex asking thoughtful and pointed questions and Elon dismissing them as if the questions themselves are moot because self driving is right around the corner.

It was mind boggling. I am hoping Tesla can provide some specifics today because it seems Elon is living in a fantasy world (albeit one I'd like to live in if we can actually get safe self-driving cars).


I'm gonna say Elon is being extremely bold selling a technology that's the current leader in deaths behind autonomous wheels.

Also, I can't reconcile how the new hardware is this huge leap ahead beyond raw computing power if, by Tesla's own claims, the previous hardware was perfectly capable of autonomous driving.

Seems people were getting fooled either now or before.


I hope Elon has tested the autopilot in Finland during the winter then.


No, as I understand it, that was the "old way" of doing it. The "new way" (which I think is called Project Crostini) is much smoother.

I got a Pixelbook a couple months ago and it was as simple as going into the Chrome OS settings, clicking the button to enable Linux support, and then it sets you up with a terminal to Linux. I've had no issue accessing the Linux environment / apps between boots.


You can go into the Chrome OS settings and click the button for Linux... after a few minutes of downloading/installation you're ready to go. I use it for development.

https://www.aboutchromebooks.com/news/chrome-os-69-stable-re...

https://chromium.googlesource.com/chromiumos/docs/+/master/c...


Looks like a great force-multiplier tool to assist the fake product review business!


It's still far more economical and effective to hire real people to type those. Plus, it's easy to recognize fully conciseness-optimized English. And when leaving reviews, your average person doesn't write the most concise English possible.

Neural networks are capable of optimizing English. The knowledge and capacity to do this is already globally widespread. Sorry to be the one to tell you.


Glad you agree. I suspect QuillBot is not ideal for fake reviews anyway, since you would likely want a diversity of positive opinions rather than the same ones regurgitated. What I'm really excited to see is where this technology goes with regard to education and writing enhancement. (For clarity, I'm the CEO of QuillBot.)


Your product is a good tool for teaching improved English. A tutor once told me, for a standardized English test, the shortest answer option that still sounds natural is likely the right one. I nearly aced it.


Well, sure... if you're looking for an "invisible idiot" solution. This short passage from Terry Pratchett:

"The expression on the face of Lord Havelock Vetenari was, for a moment, a picture. And it was a picture painted by a very modern artist, one who had been smoking something generally considered to turn the brain to cheese."

was rendered as

"Lord Havelock Vetenari's expression on the face was a picture for a moment. And it was a picture painted by a very modern artist who generally thought to turn the brain into cheese, smoking something."

Language is hard.


Like I commented below, the system is still imperfect. It's about level 2 safety if compared to a self-driving car, and it will not be doing stunt tricks any time soon. That being said, we are hopeful that it is only a matter of time.


Yup. Why does it avoid saying what this is? It's a spintax tool.


The main difference between QuillBot and spinners is that a majority of people we've surveyed say they use QuillBot for suggestions on their writing. I'd give it a good chance of providing better sentence structure if you are a non-native speaker.


Also a force-multiplier for the plagiarized essays industry!


"Additionally an impetus product for the misconduct treatise hard-working"...

