Hacker News | dhruvbird's comments

Or waiting weeks could force you to think about the problem harder and eventually figure it out because you just spent more time on it this time.


How combustible are these batteries compared to the standard lower density ones, and if one of them catches fire, how easy/hard is it for the fire department to get it under control?


Did the blog mention any metrics, like the model size, since that seems to be one of the motivating factors?


Facebook, PyTorch Mobile Team | US-Remote only | Full Time | Software Engineer (1 position available)

We’re the PyTorch Mobile Team (https://github.com/pytorch/pytorch/) working on making PyTorch broadly available for a plethora of mobile devices both within Facebook and outside. We’re interested in motivated engineers in this space who are willing to work remotely (within the US).

PyTorch is the most popular AI framework within the research community, and we’re working on making it production ready (especially on mobile devices).

You can see here (https://ai.facebook.com/tools/pytorch/) that the journey for Mobile (marked experimental) has just gotten started, and you’ll be jumping on to the bandwagon as it starts to leave the station!

Please drop us an email at (agaurav at fb dot com and dhruvbird at fb dot com), and we’d love to chat if this sounds interesting to you!


What's the tl;dr for this article? What's the key insight behind making compile times faster?


Runtime Compiled C++ (code available on GitHub: https://github.com/RuntimeCompiledCPlusPlus/RuntimeCompiledC...) allows you to change a C++ program whilst it's running.

It uses shared libraries (DLLs) to do so, but manages the creation of these for you so you don't have to set it up yourself. Only the required changes are re-compiled and linked into the shared library, which improves turnaround times over approaches such as UE4's Hot Reload, which recompiles the entire game lib.


Dynamically unloading/reloading DLLs to update code.

Cool if you can pull it off, but it seems very house-of-cards. All you need is one bad reference floating around and boom.

I kinda wonder what this would look like in Rust. You could track all the lifetimes appropriately and make sure nothing got a handle into code that could be dynamically unloaded.

You'd keep all the native performance gains but without being nearly so brittle.


We included crash protection into RCC++: http://runtimecompiledcplusplus.blogspot.co.uk/2011/09/crash...

I use RCC++ daily as part of my development routine, and although you can introduce problems which mean you need to relaunch the program, the fact that you save having to do so most of the time is a big win for iteration.


Yeah, I'm all for fast iteration, was just musing aloud if there's some extra benefits to using Rust.

For instance, your crash protection catches execution failures, but what about data failures? If you've got a write past the end of an object, nothing is going to keep you from destroying the rest of the world (unless all allocations in a hot-reload scenario are constrained to separate pages that you can protect?).

Seems like a lot of things you're trying to avoid by catching crashes are built into Rust.


The issues you mention exist in C++ without runtime compilation, so don't pose a particular blocker to using runtime compilation when you want fast iteration in C++. This is all intended for development environments, so 'destroying the world' simply means closing the application and restarting, which without RCC++ is something you have to do every time.

So there may be benefits to using Rust, but the issues you've posed are not ones I've found were my primary concerns when working on code which needs fast development iteration.


Hi, a few others in the Rust community and I kinda hashed out a plan for how you would do safe code unloading and reloading in Rust. It's nothing official, and has no implementation or official RFC/proposal yet, but the idea goes something like this:

- Rust as-is assumes program code/static memory is always valid/has the 'static lifetime. This assumption is implicit in the sense of function pointers, trait object vtables, static variables etc. all having no lifetime bounds other than 'static, which means unloading their backing memory would cause unsafety.

- Thus, in order to make code unloading safe there would have to be a new, optional compile mode in which the compiler would treat "static" things like function pointers, trait objects, statics etc. as non-'static, e.g. by giving them the new concrete lifetime "'crate". So if you have a "static FOO: T;" in an unloadable crate, you could only get a "&'crate T" to it, and you could only coerce to trait objects with a "+ 'crate" lifetime bound.

- 'crate would be similar to 'static in that it represents memory valid for the lifetime of the crate, but unlike 'static it would not imply being always valid, and as such the borrow checker would prevent a bunch of operations on it, like subtyping with any other lifetime, or usage with APIs that want 'static bounds, like thread spawning.

- Because 'crate is distinct from 'static, mixing unloadable and loadable code would be safe since the regular lifetime checking would ensure correct interaction between both. It means unloadable crates need to be explicitly written as such though.

- There are some major complications with generics and other compiler-generated glue code like vtables that are not fully hashed out yet: The issue there is that machine code corresponding to an upstream dependency gets compiled into the binary of the current crate, which means you would have code that typechecked with the assumptions of living in one binary be generated in another. Solutions here include banning generics and trait objects to types from extern crates, re-checking generics at instantiation location similar to C++ templates, or adding a new feature for instantiating generics outside the current crate, similar to "extern template" in C++.
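The implicit 'static assumption the plan above targets can be seen in today's Rust, where function pointers and references to statics satisfy 'static bounds unconditionally (a small sketch; `requires_static` is just an illustrative helper, not part of any proposal):

```rust
// Today, items live in program memory that is assumed valid forever:
static FOO: i32 = 42;

fn get() -> i32 {
    FOO
}

// A generic bound that only types valid for the whole program satisfy.
fn requires_static<T: 'static>(t: T) -> T {
    t
}

fn main() {
    // A function pointer into program code passes a 'static bound...
    let f: fn() -> i32 = requires_static(get as fn() -> i32);
    // ...and a reference to a static is &'static i32.
    let r: &'static i32 = requires_static(&FOO);
    assert_eq!(f() + *r, 84);
}
```

Under the proposed 'crate mode, both coercions in main would fail to typecheck for an unloadable crate, because its code and statics would only live for 'crate, not 'static.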


Keep in mind that when you have a dll you are unloading instructions, not data.

Also, C++ has unique_ptr, which also tracks lifetimes; I don't think Rust would really give you anything you don't already have, and I think you are overestimating the fragility. I've done hot reloading of DLLs and it isn't really all that fragile. If you have the C runtime as a dynamic DLL, the heap won't be tied to the DLL either.


unique_ptr does not "track lifetimes".

For example, this Rust program:

  fn main() {
      let r: &i32;

      {
          let b = Box::new(5i32);
          r = &*b;
      }

      // r is dangling here, this would be bad:
      println!("{}", r);
  }
Is caught at compile time. Rust keeps track of the lifetime of both b and r, and knows that r will be dangling.

The C++ version:

  #include <iostream>
  #include <memory>

  using namespace std;

  int main() {
      int *r;

      {
          unique_ptr<int> b(new int(5));

          r = &*b;
      }

      // r is dangling here
      cout << r << endl;
  }
This compiles cleanly under -Wall -Wextra, with no warnings, and happily does whatever you ask with r, even though it's dangling.

Yes, this specific example is a bit contrived, but such is the way of examples. My point is just that C++'s smart pointers are a great thing, but that doesn't mean they do everything that Rust's do.


unique_ptr does not track lifetimes in any way. Nothing in C++ does.


You should be able to tie the lifetimes together with something like weak/shared pointers, but more task-specific, such that unloading/reloading is blocked until all shared references are gone while the weak pointers can still connect to new versions. It shouldn't be brittle, but "shouldn't" may or may not be a four-letter word.
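The refcount-guarded unload described above can be sketched with Arc in Rust (names like `Module` and `Loader` are made up for illustration; a real implementation would guard actual loaded libraries, not a plain struct):

```rust
use std::sync::Arc;

// Stand-in for a loaded code module.
struct Module {
    version: u32,
}

// Owns the currently loaded module and hands out strong handles.
struct Loader {
    current: Option<Arc<Module>>,
}

impl Loader {
    fn load(&mut self, version: u32) -> Arc<Module> {
        let m = Arc::new(Module { version });
        self.current = Some(m.clone());
        m
    }

    // Refuse to unload while any handle outside the loader is alive.
    fn try_unload(&mut self) -> bool {
        match self.current.take() {
            None => true,
            // Only the loader's own reference remains: safe to drop.
            Some(m) if Arc::strong_count(&m) == 1 => true,
            Some(m) => {
                self.current = Some(m); // still in use elsewhere
                false
            }
        }
    }
}

fn main() {
    let mut loader = Loader { current: None };
    let handle = loader.load(1);
    assert!(!loader.try_unload()); // blocked: `handle` is still alive
    drop(handle);
    assert!(loader.try_unload()); // now safe to unload and reload
    let handle2 = loader.load(2);
    assert_eq!(handle2.version, 2);
}
```

Weak handles (std::sync::Weak) would be the "can still connect to new versions" half: they don't block try_unload, and after a reload they simply fail to upgrade and can be re-pointed at the new module.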


The TL;DR for this article is, approximately: "Our heads are so far up our asses, that the only choices we know about are somehow compiling C++ at run time, or using a slow, interpreted scripting language. Specifically, we've never heard of dynamic languages with good compilers that are available at run time."


Thanks for this insightful comment. I shall remove my head from my arse (I'm British) for just a moment so that I can see the monitor sufficiently well to type a response.

The article covers a number of areas where scripting languages fall down in practice as a solution for fast iteration in many areas of game development. Please see section 15.2.1.


Great, you have a dynamic language with a good compiler; you at least need strict coding guidelines to ensure performance, and tools to get back what is lost without static typing.

No tool is perfect. Don't knock solutions to problems you don't have.


> The TL;DR for this article is, approximately: "Our heads are so far up our asses

Aw come on. Don't make Lisp programmers look even worse. It's embarrassing.

Also, this breaks the HN guideline against calling names in arguments.


Come on. That has very little to do with 'Lisp programmers'. It's mostly his personal problem.


Sure, but sadly that very little is enough to have created a big perception, which definitely wasn't one person's creation.


Does that make sense?

I fear it doesn't.

It wasn't your best attempt at moderating. Just my direct perception.


That's a pretty bold statement for not providing any suggestions.


@kbenson Your explanation makes a lot more sense. Increasing the hash table size probably didn't affect performance significantly, but binding to the ANY IP did.


What does clashing column mean?


Two commits touching the same file, represented by the identifier at the end of the rebase's commit line. If you look at each file as a column, you can see which commits would clash.


Potentially clash; the commits may touch completely different parts of the same file without conflict.


This is the main prize. Using emacs + magit I can rebase and look at per commit files whilst having a buffer open to re-order commits.

In fact magit could probably do some of this with a hook on reordering commits which would not require more git metadata.

I'm not sure I care about any of this, though; if I do rebase, reorder, and then hit a conflict I can't resolve, I can always abort the rebase.


True that!

My original idea was to somehow show the actual conflicts, but then I figured this gets the desired 80% for 2% of the effort...


Any simple explanation for this? I'm interested in learning more about it.


Author here. The idea is a mix of many things. This is not how we write it in the paper, but just for intuition.

If Σ(S) is the set of all subset sums of S, where S = {s_1, ..., s_n}, then

Σ(S) = Σ({s_1}) + Σ({s_2}) + ... + Σ({s_n}).

Here A+B = {a+b | a in A, b in B}, and note Σ({s}) = {0, s}; + is associative and commutative. Now we want to parenthesize that formula in a way that makes the computation fast, similar to matrix chain multiplication. We make sure + can be computed fast if certain properties are satisfied (Theorem 2), and the rest is just figuring out the right way to add parentheses.
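Just to make the decomposition concrete, here is a naive sketch of it (folding the +s left-to-right with a quadratic sumset; the paper's speedup comes from evaluating + faster under the conditions of Theorem 2 and choosing a better parenthesization, not from this loop):

```rust
use std::collections::BTreeSet;

// A + B = { a + b : a in A, b in B }, computed naively in O(|A||B|).
fn sumset(a: &BTreeSet<u64>, b: &BTreeSet<u64>) -> BTreeSet<u64> {
    let mut out = BTreeSet::new();
    for &x in a {
        for &y in b {
            out.insert(x + y);
        }
    }
    out
}

// Σ(S) = Σ({s_1}) + ... + Σ({s_n}), where Σ({s}) = {0, s}.
fn subset_sums(s: &[u64]) -> BTreeSet<u64> {
    let mut acc = BTreeSet::from([0u64]);
    for &v in s {
        acc = sumset(&acc, &BTreeSet::from([0, v]));
    }
    acc
}

fn main() {
    // Every subset of {1, 2, 4} has a distinct sum: 0 through 7.
    assert_eq!(
        subset_sums(&[1, 2, 4]),
        BTreeSet::from([0, 1, 2, 3, 4, 5, 6, 7])
    );
    // Duplicates collapse: subsets of {3, 3} sum to 0, 3, or 6.
    assert_eq!(subset_sums(&[3, 3]), BTreeSet::from([0, 3, 6]));
}
```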


People try to make the fit be as tight as possible to the sample data -- the explanation is that simple. I don't buy the explanation provided in the article.


Pretty good point. That's how I felt.

Additionally, this setting is probably too close to usual situations you get in school where there is little to no interaction and negative answers from the teacher are seen as failures by students. (Speaking about education in my country only.)


@bedhead hit the spot with his comment "Circumstances matter. I suppose if Uber hadn't entered into a partnership first, the whole thing would be more palatable. But by going this route, Uber is sending a clear signal to all potential future partners - watch out. I just find it distasteful, and sadly predictable given Uber's history."

