Hacker News | jonnycowboy's comments

Not in this case. With SpaceX they also need high- (re-entry, Mach heating) and low-temperature (cryo fuel & oxygen) resistance, but in the case of Boom the surface won't heat up enough to need high-temp materials.

That said, I'm not sure their choice of materials is the right one for a startup.


CF prices have come down rapidly in the past 20 years and it's becoming the standard structural material for original airframes. It's more workable and less exotic than it was before. I think most, if not all, aviation startups are using it, like the Icon A5.


> CF prices have come down rapidly in the past 20 years and it's becoming the standard structural material for original airframes.

No and no.

CF can mean anything, but typically you end up with structural members that are heavier than aluminum when all is said and done.

Also, CF is only good for subsonic (actually sub-transonic) skins, which SpaceX found out the hard way.

> It's more workable and less exotic than it was before.

No.

> I think most, if not all, aviation startups are using it, like the Icon A5.

Composites are used for subsonic planes and interiors. And the Icon A5 is a toy.

The one good thing about composite construction (as in the Diamond Aircraft models) is that it's hail-resistant up to a certain hailstone diameter and intensity, around 1 inch. That has been helpful in some storms in Florida, for example.


When interacting with anything rigid (or even deformable), position control does not mean anything. Force (or torque) control is what defines the actual physical interaction.


We actually have very cheap and pretty powerful position-controlled actuators (hobby servo motors). Attach any kind of spring and a displacement-measuring device (potentiometer, Hall sensor, optical encoder, LVDT, etc.) and voila: an instant torque-controlled actuator.

You can look up Series Elastic Actuators for more info, or use this article as guidance (any spring will do as long as the force range and spring constant are adequate).

https://www.sciencedirect.com/science/article/pii/S240589631...
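To make the idea concrete, here's a minimal sketch in Python. The spring rate `K`, the loop gain, and the servo/sensor interface are invented for illustration, not taken from the article:

```python
# Minimal sketch of the series elastic idea: command position, measure
# how far the spring between the motor and the load has deflected, and
# infer torque via Hooke's law. K and the gain are made-up example values.

K = 2.5  # N*m/rad, spring constant of the elastic element (assumed)

def estimate_torque(deflection_rad):
    """Spring deflection maps directly to output torque: tau = K * x."""
    return K * deflection_rad

def torque_step(target_torque, position_setpoint, deflection_rad, gain=0.05):
    """One iteration of a proportional torque loop: nudge the servo's
    position setpoint until the measured spring torque matches target."""
    error = target_torque - estimate_torque(deflection_rad)
    return position_setpoint + gain * error  # new setpoint for the servo
```

Any of the displacement sensors listed above works for `deflection_rad`; the loop only needs the deflection in consistent units.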


The servo they use is $493 [0]; at that price I wouldn't necessarily call it a hobby servo. The control for that specific series elastic servo needs some work. There have been other attempts at making cheap series elastic actuators; an interesting one was the programmable-spring work [2][3]. One problem with series elastic actuators, though, is that they can be difficult to control because of the compliance. Force servos were another interesting attempt at cheap force control [4], and by using load cells they avoided the compliance problem. Unfortunately, force servos did not have any position control.

[0] http://www.robotis.us/dynamixel-mx-106t/
[1] https://www.youtube.com/watch?v=sjFR4ACVLmk
[2] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.147....
[3] https://www.youtube.com/watch?v=_g79mOSvSsE
[4] https://www.youtube.com/watch?v=sjFR4ACVLmk


It is also possible to use current sensing to measure torque.

An ODrive + brushless motor + encoder (~150€ per axis) can provide current measurements, though you may need to write your own PID loop.

If the brushless solution is still too expensive, you can use 775 brushed motors + encoder + BTS7960 driver + an optional gearbox (10€-50€ per axis from China). Those drivers do current sensing via an added shunt resistor and RC filter. That setup isn't powerful enough for jumping robots, though.
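As a rough sketch of the current-sensing approach: for a brushless motor, torque is approximately Kt times phase current, so a current reading gives you a torque estimate to close a loop on. The torque constant `KT`, the gains, and the class interface below are invented example values; a real ODrive already runs its own current loop in firmware.

```python
# Hedged sketch: torque control via current sensing.
# Torque ~= KT * current, so measuring current (from an ODrive, or a
# shunt resistor + RC filter on a cheap brushed driver) lets a PI loop
# servo the torque. All constants here are made-up examples.

KT = 0.08  # N*m/A, motor torque constant (from a datasheet)

class TorquePID:
    def __init__(self, kp=1.0, ki=0.2, dt=0.001):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, target_torque, measured_current):
        """Return a current command that drives torque toward target."""
        error = target_torque - KT * measured_current
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral
```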


Also interested in answers to these two questions, as well as OpenCL performance in vanilla linux (iMX6 and above).


Did you go to Sherbrooke?


Hah, yes, actually!


Sad to hear. I demoed a system with a Tobii sensor and it performed very well (even with my glasses). We were wondering about the effect of the constant IR illumination (is it continuous or pulsed?), so thanks for the input.


Bring an old cell phone whose camera is sensitive to IR light and place it in front of a Tobii system. You'll see they're pulsing at a high frequency (something like 30 lights at a >60 Hz refresh rate), to the point that you can actually see the light on the user's face (with a sensitive camera). Not all eye-tracking companies put this much light on your eyes, but Tobii seems to be cutting corners to make up for shortcomings in image processing. It's a brute-force approach to eye tracking.

The sensation you get after using a Tobii is the feeling of having been up past midnight staring at a bright screen. Most people don't realize it's the eye tracker that makes them feel this way. Some of the more respected companies take the amount of IR light placed on your eyeballs seriously and try to drastically reduce it.

Talk to Tobii about it and they'll just bury it and say "there are no known health risks, or regulations, about having that much IR light on your eye". Basically, no one has set a threshold for how much IR light should be on a person's eye, thus it isn't a problem "we should worry about".


Here are the safety standards regarding IR https://help.tobii.com/hc/en-us/articles/212372449-Safety-gu...


I work for a company that makes robot arms for assistive purposes (Kinova) so I have some insight to share.

First, a voice setup with Alexa or similar can really help.

With regard to phone use, some of our users have an attachment that holds the phone close to their head, and they use their nose to "click"/select (they can move their head).

Eye-tracking technology is really impressive these days (it can be as fast as using a mouse). I recently demoed a system with a Tobii sensor (https://www.tobii.com/) hooked up to a laptop; very impressive when combined with appropriate software (it handles scrolling, keyboard shortcuts, etc. in a custom interface). I'm not sure how well they integrate with phones/tablets, though.

Ping me on Linkedin if you'd like to talk more.


The thing is that there was a lot about those old systems that was slow, so you were very, very careful about how you programmed. You tended not to use vast library stacks; you went close to the metal and coded in languages like assembler, COBOL, or FORTRAN. I/O was often offloaded to specialized co-processors (such as IBM's channel processors), and the terminals could sometimes help too.

I have friends who have been looking after legacy applications for an airline running on Unisys. The core apps for reservations, cargo booking, and weight/balance were written in FORTRAN. In recent times, a front end was written in Java to give web access. They tried to rewrite the core apps but it was impossible to do so and get the performance.


>> They tried to rewrite the core apps but it was impossible to do so and get the performance.

Well, COBOL is a bit like the C of mainframes: you can manipulate memory directly and so on. You can't really do that sort of thing in Java.


a) If it was really running on the old hardware, then Ruby on a modern machine would have been several orders of magnitude faster than the original code, if only because of the faster I/O.

b) If the whole thing was instead running in an emulator, the emulation overhead would have negated all the direct-memory-access advantages.


Emulators on mainframes are much more sophisticated and performant than is typical on x86 and ARM platforms. The hardware and even software is often designed with emulation in mind, not just for backwards compatibility but for forwards compatibility, too.


Do you mean to say that they were running a mainframe emulator on a mainframe?


Compilers for some IBM mainframes have for decades (since the 1970s, I think) targeted an intermediate virtual-machine instruction set, which is then translated on the fly to the local architecture by the OS upon execution. So in IBM's case, their machines are truly built with both forward and backward compatibility in mind. The pointers in this instruction set have been 128 bits since the beginning, long before 64-bit hardware even came into being.

Some (or all?) of the latest Unisys mainframes run on Intel Xeons, but with custom chips for translating the machine code of their old architectures.

I don't work in this area. I just like reading about it. Though, unfortunately, it's difficult to find clear specifications and descriptions on how these architectures work.

For example, Unisys' ClearPath MCP architecture is one of the last (if not _the_ last) architectures still sold that uses signed-magnitude representation, as well as an odd integer width of 39 bits. (INT_MAX is 549755813887 and INT_MIN is -549755813887, and the compiler has to "emulate" the modulo-arithmetic semantics that C requires of unsigned types. ClearPath MCP is also a POSIX system, and has to emulate an 8-bit char type.) I discovered that you can download the specification for their C compiler for free online, which was useful when discussing the relevance of undefined behavior in C.

But AFAIU (and this is where finding concrete details gets harder), the latest models of the ClearPath line use Xeons with custom chips bolted on to help run the machine code of the older architecture. In any event, the point is that while the old architecture is arguably emulated, it's not the pure software emulation you might assume, and the resulting performance is better than that of the previous models of those mainframes, which were still being built at least until a few years ago. In other words, direct memory access isn't ruled out, because the I/O systems may have been intentionally designed to work efficiently in a backwards-compatible manner.
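A quick sanity check on those limits, plus a sketch of how a compiler might emulate unsigned wraparound in software. This is an illustration of the technique, not Unisys' actual implementation:

```python
# 39-bit signed-magnitude integers: the sign is a separate bit, so the
# magnitude uses all 39 bits and the range is symmetric about zero
# (there is no extra negative value like two's complement has).
BITS = 39
INT_MAX = 2**BITS - 1   # 549755813887, matching the quoted limit
INT_MIN = -INT_MAX      # signed magnitude: symmetric range

def unsigned_add(a, b, bits=BITS):
    """How a compiler might emulate C's required unsigned wraparound
    on hardware with no native unsigned type: reduce mod 2**bits."""
    return (a + b) % (2**bits)
```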


The 128-bit-pointer intermediate code is used on what IBM calls "midrange systems" (i.e. the AS/400), not mainframes. IBM mainframes execute their machine code directly, and the ISA has been designed from the beginning to allow efficient virtualization while remaining extensible and backwards-compatible. Otherwise, the IBM mainframe magic is in I/O offload and truly immense memory bandwidth. Unisys systems, on the other hand, use an architecture that is significantly different from what today's programmer would expect, with a completely different memory model originally implemented in hardware (which essentially combines the memory-protection model that the AS/400 implements in software with lisp-machine-style pointer tagging).


Excellent comment. I'll also point out that System z mainframes are not slow machines, so even with some emulation overhead there is typically enough performance.


Yes, but good luck translating any sizeable chunk of code to a higher-level language without a massive effort figuring out which things are discarded side effects that can be ignored and which are relied on later. I just spent a few hours last night massaging a C translation of some 6502 assembler. It's a tiny piece of code, ~3000 lines that will probably shrink to ~2500 or so as I figure out which results are ignored (the original translation attempted a faithful 1:1, instruction-by-instruction translation, which leads to things like long sequences to handle basic multiplication). Even so, it takes ages, because it is not always obvious when the code depends on the status flags that were set, and values keep being moved between registers. Now try doing that with a big piece of code.

There's a reason why people often resort to emulators.
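To illustrate why faithful translations balloon (a hypothetical example, not the actual code being ported): the 6502 has no multiply instruction, so an 8-bit multiply is a shift-add loop, and a 1:1 translation drags every register and flag along even when nothing downstream reads them.

```python
def mul8_faithful(a, x):
    """8-bit multiply the way a 1:1 translation of 6502 shift-add
    code renders it: the carry flag and shifted registers are all
    modeled explicitly, instruction by instruction."""
    result = 0
    multiplicand = x
    for _ in range(8):
        carry = a & 1                    # LSR A: bit 0 falls into carry
        a >>= 1
        if carry:                        # BCC skips the add
            result = (result + multiplicand) & 0xFFFF
        multiplicand = (multiplicand << 1) & 0xFFFF  # ASL/ROL pair
    return result

def mul8_rewritten(a, x):
    """What the code actually computes, once you have proven that the
    flags and intermediate registers are never read again."""
    return (a * x) & 0xFFFF
```

Proving the two are equivalent is exactly the tedious part: you have to show the carry flag and shifted registers really are dead at the end of the sequence.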


> The thing is that there was a lot about those old systems that was slow, so you were very, very careful how you programmed.

That's a common sentiment. I wish I could find the quote from someone who made the transition; it was about how happy they were to be able to compile so much more quickly, and how getting immediate feedback made them so much more productive.


The notion of waiting ages for programs to compile or assemble is mostly related to the older hardware.

I compile/assemble COBOL and IBM's assembly language on a z13 daily and it's pretty much instantaneous.


> The notion of waiting ages for programs to compile or assemble is mostly related to the older hardware.

Oh, I was talking more about older ways to organize the data centre: batch vs timeshare processing.

https://en.wikipedia.org/wiki/Time-sharing


The real question, of course, is how to train the AI to have varying levels of "expertise" (i.e., penalize it if its score is too high).


For realtime games, maybe just add a little forced think-time for each action? You could even adjust that dynamically.

It's necessary anyway to prevent the AI from having godlike micro.
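A sketch of that idea (the class name and delay values are invented): enforce a minimum interval between actions, and tune the interval to set the difficulty.

```python
import time

class ThinkTimeLimiter:
    """Caps an agent's actions-per-minute by forcing a minimum delay
    between actions; shrink min_delay to raise the difficulty."""

    def __init__(self, min_delay=0.25):   # seconds between actions
        self.min_delay = min_delay
        self.last_action = 0.0

    def try_act(self, now=None):
        """Return True iff the agent is allowed to act at time `now`."""
        now = time.monotonic() if now is None else now
        if now - self.last_action >= self.min_delay:
            self.last_action = now
            return True
        return False
```

With min_delay=0.25 the agent tops out at 240 actions per minute, roughly human-pro APM; dynamically raising min_delay is one way to dial the "expertise" down.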


Shorting insurance companies.


You have no idea how hard it is to short something, do you?


That depends on what companies you want to short and your broker. I've shorted plenty where I just had to enter a few numbers and press a button...

There are also any number of companies that let you do the equivalent via CFDs (Contracts For Difference). Be warned: many of them allow crazy levels of leverage that can wipe you out on tiny little swings.

You're right that it's not a given you'll be able to "just" short whichever share you want, but it's hardly difficult to find a way of betting against a given company.


