Realistically, if it's an equal split, they probably each own ~5% of the company (by that stage, the founders' combined shares can be down to ~20%).
If they are raising $175M, their dilution is probably in the ~15% range, putting their valuation a bit north of $1B.
So each founder is probably worth, on paper, around $50M right now. Again, this assumes an equal split. It's possible that the CEO has more shares by virtue of assuming greater responsibilities.
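(Rough arithmetic behind those numbers, using only the assumptions above: $175M ÷ 0.15 ≈ $1.17B post-money, and ~5% of ~$1.17B ≈ $58M per founder on paper, before any further dilution.)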
If you are long the US and/or the logistics portion of the on-demand, digitally assisted marketplace economy, it's not a bad idea.
For example, it's impossible to invest in just AWS without AMZN's other businesses. However, AWS is far less susceptible to the current macro and geopolitical instability than AMZN's retail business.
This is not entirely true. Yes, Ballmer made many mistakes, but he also architected one of the most sustainably profitable enterprise software businesses (on the heels of all the MS software products we love to hate). These enterprise customers are more than knee-deep with Microsoft, and that deep account penetration bought Satya at least 5 years to steer the ship in the right direction (giving up on B2C mobile, the Windows obsession, etc.)
One of the most common biases I see in how we assess leaders is how we try to attribute success & failure to a single person at a discrete point in time (ex: Here in the US, the extreme Left claims Trump is taking credit for Obama's foundational effort for the economy while the extreme Right says it's all Trump). In reality, it is never that simple nor discrete but rather complex and continuous.
I have to point out that while their marketing copy claims that they want to know "as little as possible" about their customers, they have Segment's JavaScript tag on thehelm.com
Also, no actual content appears on the page without allowing third-party JavaScript. Even though I'm interested in hearing about new mail services, I chose to close the tab rather than spend time figuring out which of the 17 third-party domains I should white-list in order to see some text.
I doubt I'm the only one who made this choice. For some of us, content is still the reason we use a web browser; that includes those of us who value privacy and minimising the amount of arbitrary code we run on our devices.
>As a final note, I want to add that if anyone was a “cofounder” of TechCrunch it was Heather Harde. She joined in 2006 but she was in the trenches with the team until the very end, working 20 hour days, sacrificing her personal life and giving everything she had to make the company what it was. Heather never sells herself like Keith does, but she should. Unlike Keith, she helped build that company, and gave way more value than she ever took.
As someone who's also been an early employee, I totally see why Heather does NOT claim to be a co-founder. When you work your ass off alongside the founders, you learn two things:
1. Building a company is incredibly hard.
2. Nobody takes it harder than the founders.
Yes, I have worked hard. In the last six years, I always worked parts of my weekends, if not my vacations (which were few and far between early on). I pulled all-nighters as well as a last-minute trip straight from work to close a customer. I fell asleep on my laptop answering support questions. There were many nights of dry tears and waking up in a cold sweat from nightmares that were all too real.
But what I went through is a fraction of what the real founders went through. And I saw their ups and downs up close.
Startups are hard. Incredibly hard, especially for those who have to lead the ship from Day One. Early, dedicated employees see this first-hand, thinking to themselves, "Man, I thought I had a good/bad day, but they are probably having it better/worse." That's why we never claim to be the "founders".
> Sacrificing your personal life for your employer is pathological, not heroic
This isn't an article about heroism. Keith Teare claims he co-founded TechCrunch. Arrington says "no" and then details why.
One part of the "why" is about value-adding output. Arrington "could never get [Teare] to write anything, or help pay any bills." Co-founders add value; Keith didn't do that.
The second part of the "why" is about input. Output requires input. But it's fair to highlight both, in part to block claims of unappreciated work. In highlighting Heather's "20 hour days" and "sacrificing [of] her personal life," Arrington draws into contrast the difference between someone he considers a co-founder and someone he doesn't.
> 20-hour days are not actually worked
Founded a company. Worked twenty-hour days. Deceptively easy to do if you're chasing a short-term deliverable across multiple time zones. (Short-term because this tactic is obviously unsustainable.)
The first 80 hours of a work week seem the hardest... the next 60 fly by in a daze of mania. It's hard to remember how to sleep and not think about work all the time, but your productivity is constantly falling. It works best for projects you can literally do in your sleep; anything requiring decision-making, detail, or creativity should be done on the first day. If mistakes aren't very obvious when assembling the final blocks at the end, you'll miss them. Camaraderie is important too!
I've been a lot of places, and I have some experience with sleep deprivation. One ought not assume too much.
If you're regularly working 20-hour days, as opposed to the occasional all-nighter, then you're not doing anyone any favors -- not your company, not your friends/spouses/children, not yourself.
But anyway I was alluding to the tendency to boast about how little one sleeps...
I wish companies understood this. Part of my boring non-startup job consists of occasional weeklong sprees of 20+ hours/day, staying on the jobsite the entire week, with no additional compensation.
I'm definitely dead inside by the weekend, and noticeably affected after the first day. The logistics of what I do probably really do make it somewhat of a necessity, but I wish there were at least provisions for comp time afterwards, or some kind of bonus pay.
So have I, but not as a regular thing. Working "20-hour days" implies it's regular, and that means -- assuming no commute -- you're sleeping max 3 hours a night, which is pretty messed up.
o/t but I'm surprised this random comment touched a nerve here...
I think you just read it differently than the rest of us. I read Arrington as saying she worked the occasional 20-hr day regularly. In other words, during particularly bad stretches, which were fairly frequent, but not every day.
I agree that almost nobody could actually sustain working 20-hr days every single day.
I agree on both. Honestly, many of my long hours were due to my inexperience rather than supernatural productivity. If I were to do the same work with what I know now, I'd do it very differently, with far fewer hours (probably).
That said, the point OP was making is that in a startup origin story, there's a significant perception gap between those who actually moved the needle and those who claim to have done so. Those who actually drove the bus and those who rode it. Time and again, I've seen the latter people take advantage of the relative reticence of the former.
It's probably more like, "relative to this guy who tried to kill my company early on and later claimed undue credit for its founding, this dedicated early employee surely seemed like she worked 20+ hour days and sacrificed everything for me."
> ... many of my long hours were due to my inexperience rather than supernatural productivity.
It takes a fair bit of experience with long days to know that the feeling of doing work and doing 'stuff' pales in comparison with the productivity of doing that work well while rested and focused...
Overtime creates 'undertime': cleaning up tired mistakes takes longer than making them, and when it comes to coding, nothing is worse than wasting 15+ hours implementing something you didn't think through clearly enough to start with, then realizing it was all wasted or could have been done in one.
> this dedicated early employee surely seemed like she worked 20+ hour days
I feel the discussion of this line is perhaps conflating "worked 20 hour days [on the rare occasion something crazy was happening]" with "worked 20 hour days [from 0400 hours to midnight every day]". Just because someone worked some 20 hour days does not mean they worked 20 hours every day :)
When I was in startup-land, if we had a big deliverable we'd work until we couldn't work, which usually was about noon to 8am. Then everyone would go home and sleep it off, and the next day we'd try to not do that again... sometimes successfully, sometimes not.
Now that I have a "normal" job, I still occasionally do a big coding sprint of 20 hours or more, but I don't do it day after day.
In startup land, when the CEO came in at 7am and saw us frenetically coding away in the Nerd Loft, I sure hope he told everyone we all worked 20-hour days. :-)
(a) If you did even 75% of what you just described, you did more than many founders do.
(b) Just because one is a founder doesn't mean one magically has more energy than other humans.
(c) Just because one is a founder doesn't magically mean one earns a bigger share of the success. An early employee shares most of the cost (time spent, fears, the real danger of having no money next month, possible conflicts with the law) but doesn't get anywhere near as much of a share when the riches arrive. Idolizing this economically unreasonable situation doesn't make it better.
(d) I'd say the best entrepreneurs enjoy the thrill, so they actually go through a lot less negative stress than their employees.
Programmers continue to discover and re-discover APL and array-oriented programming.
If someone is serious about array programming/APL and wants to make a lucrative career in it, study KDB+/Q/K. There's a small but active community around them.
Just to relate an anecdote: I used to be a fairly active KDB+ programmer in finance but left both finance and programming ~8 years ago. Just yesterday, I got an email from my friend who's building a statarb fund around cryptocurrencies. They were going to use KDB+ and offered me a job for "300-500k base with incentives"...
My friends clearly don't know that I now work in sales & marketing :D
Aside: Roger Hui, co-creator of J, is a programming sibling of Arthur Whitney, the creator of K/Q and the founder of www.kx.com
Did you think the finance programming industry (specifically the KDB+/Q/K niche) fostered a "bro" culture (think Wolf of Wall Street)? It always seemed like an interesting field, but every once in a while I hear discouraging stories. Curious about your experience with it.
I think kdb/q developers are possibly as far from Wolf of Wall Street as it's possible to be.
One guy in London who worked in kdb was a huge arse; the tens of others I've encountered have all been lovely. It's an insane little language, though I'm only a basic user.
return
// Start by generating all dates for the given year
datesInYear(year)
// Group them by month
.byMonth()
// Group the months into horizontal rows
.chunks(monthsPerRow)
// Format each row
.map!(r =>
// By formatting each month
r.formatMonths()
// Storing each month's formatting in a row buffer
.array()
// Horizontally pasting each respective month's lines together
.pasteBlocks(colSpacing)
.join("\n"))
// Insert a blank line between each row
.join("\n\n");
Much more readable than something like "(~R∊R∘.×R)/R←1↓⍳R".
Is it more readable? I'm even less familiar with D than APL, and that D code only looks more readable to me right now because it's mostly comments.
Wouldn't adding 8 lines of comments help newcomers to APL, too? This seems like a rather unfair contest.
Familiarity counts for a lot. 25 years ago when my primary language was 6502 assembly, I would have said 20 pages of assembly code look more readable than either of these.
In the long run, though, in every field I've worked in, being able to turn 20 lines of code into 20 characters has turned out to be a huge benefit. Once you familiarize yourself with the vocabulary, you can operate at a higher level. Nobody fears concise names and symbols when it's in a context they understand. I think even the most APL-phobic would admit that "a←b+c" is more readable than "assign(a,sum(b,c))".
Anything you know how to read, peppered with comments you also know how to read, is more readable than anything you don't know how to read and that comes without such helper devices.
You don't just have to learn it (that's the easy part); you actually have to grok it, and that's the hard part.
The piece of code |/0(0|+)\ (all 9 characters of it) efficiently computes the maximum subarray sum[0] of an array using Kadane's algorithm. You'll either have to learn K or trust me on that.
While it is possible to break it down into multiple parts and document each, it is idiomatic to just use it and document the whole line. Or not document it at all, because experienced K people know it already. Why call a function "average" (7 chars) when an implementation, (+/x)%#x, is barely longer at 8? Furthermore, you know from reading it what the average of a zero-length list is (NaN). Do you know what a function called "average" would return in this case?
I would say the answer to your question is that K is very different than other programming languages, and requires a different mindset. Somehow, attempts to give it a more mainstream face (e.g. article author's "Kerf" project) do not seem to take off.
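For anyone who doesn't read K, here's a rough C sketch (my own illustration for this comment, not how K implements these) of what the two expressions above compute:

    /* Hypothetical illustration of  |/0(0|+)\x  and  (+/x)%#x  in C. */
    #include <stdio.h>

    /* |/0(0|+)\x : scan with "max(0, running sum)" (seed 0), then fold with
       max, i.e. Kadane's maximum subarray sum. */
    static double max_subarray_sum(const double *x, int n) {
        double best = 0.0, run = 0.0;
        for (int i = 0; i < n; i++) {
            run += x[i];
            if (run < 0.0) run = 0.0;   /* 0| : clamp the running sum at 0   */
            if (run > best) best = run; /* |/ : take the maximum of the scan */
        }
        return best;
    }

    /* (+/x)%#x : sum divided by count; an empty list gives 0.0/0.0, i.e. NaN,
       which is the behaviour the comment above alludes to. */
    static double average(const double *x, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++) sum += x[i];
        return sum / (double)n;
    }

    int main(void) {
        double x[] = { -2, 1, -3, 4, -1, 2, 1, -5, 4 };
        printf("%g %g\n", max_subarray_sum(x, 9), average(x, 9)); /* 6 0.111111 */
        return 0;
    }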
I don't know either party but I'd hazard a guess that they're not really offering 500k for a K developer.
They're more likely offering 500k for someone who has a lot of experience managing financial tick data in a KDB environment.
The catch will be that not everyone who has read the KDB for Dummies book (OK, I made that up) will be able to walk into a stat arb hedge fund and add 500k of value.
Yes, I was wondering the same thing. Some guesses: some people may not be comfortable with the somewhat unconventional syntax/semantics (e.g. no operator precedence, just right-to-left evaluation, or as they call it, "left of right" :) (I'm fine with it myself), or might think you can only get jobs using kdb+/q/k in big cities like New York or London, and may not want to move there. Possibly low market demand (compared to mainstream languages), despite the high comp, may be another reason. Also, people may think you need finance knowledge too, and may not have it. Those are some guesses, that's all. Interested to hear if anyone else has ideas on this.
Edit: Just saw beagle3's comment after posting mine. Some interesting points there. Still wonder about the non-tech factors though, as listed in above part of my comment.
As a product marketer, I have to say that the name is very unfortunate.
1. Sundown is the name of a Markdown library written in C: https://github.com/vmg/sundown. That Sundown is pretty widely known and used among Markdown implementers. As such, when/if the OP's project takes off, it is going to be confusing (for a while).
2. Also, the name itself doesn't strongly suggest that it's an alternative to Markdown. Do recall that Markdown's etymology comes from "(HTML) markup". For the Markdown-implemented-in-C Sundown, the name worked because it evoked its relationship with Markdown. In this case, that's not what the project is about. If the intention was to evoke Markdown, then they should have kept the "Mark" part of it, not the "down" part of it.
I used to use KX/kdb/Q/K daily for several years. I wrote a full implementation of reinforcement learning (15 lines), a lightweight MVC framework (to show reports and tables in an internal webapp), and even a Q syntax checker (abusing tables as a data structure to hold parse trees). Good or bad, for the longest time, Q was my "go-to" programming language.
Based on that experience...
1) Yes, but that's not huge by modern standards.
2) Q is a DSL version of K. As others have commented, K is a pretty clean implementation of APL, and Q makes K more approachable.
3) I have to agree here, but Q for Mortals makes up for it.
4) It is really fast. As we all know, the vast majority of us actually don't have terabytes and terabytes of data, especially after a reasonable cleanup / ETL / application of common sense. I suppose it helped that I worked in finance, which meant my desktop had 16GB of memory in 2009 and 128GB of memory on a server shared by 4-5 traders.
Finally, Q was never intended for general-purpose computing nor for widespread adoption. At least when I was an active user, the mailing list had the same 20-30 people asking questions and 3-4 people answering them, including a@kx.com (= Arthur Whitney, the creator). Back then, I'd say there were at most 2-3k active users of Q/K in the world. Now that Kx Systems is part of First Derivatives and has been working on expanding their customer base, perhaps they have more...?
It is worth pointing out that really fast is ... well ... really fast. See [1] for some benchmarks they did for small, medium, large data sets.
The machines that $dayjob-1 used to build dominated the STAC-M3 for a few years (2013-2015) because we paid careful attention to how kdb liked to work, and how users liked to structure their shards. Our IO engine was built to handle that exceptionally well, so, not only did in-memory operations roar, the out of memory streaming from disk ops positively screamed on our units (and whimpered on others).
I miss those days to some degree. Was kind of fun to have a set of insanely fast boxen to work with.
OP could have phrased it better, but I presume his point was that 500KB is extremely small by modern standards. The whole executable fits comfortably in L3, so you'll probably never have a full cache miss for instructions. On the other hand, while it's cool that it's small, I'm not sure that binary size is a good proxy for performance. Instruction cache misses are rarely going to be a limiting factor.
> Instruction cache misses are rarely going to be a limiting factor.
k's performance is a combination of a lot of small things, each one independently doesn't seem to be that meaningful. And yet, the combination screams.
The main interpreter core, for example, used to be <16K code and fit entirely within the I-cache; that means bytecode dispatch was essentially never re-fetched or re-decoded to micro instructions, and all the speculative execution predictors have a super high hit rate.
When Python switched the interpreter loop from a switch to a threaded one, for example, they got ~20% speedup[0]; I wouldn't be surprised if the fitting entirely within the I-cache (which K did and Python didn't at the time) gives another 20% speedup.
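To make that concrete, here's a tiny C sketch (my own toy example using the GCC/Clang computed-goto extension, not CPython's or k's actual code) of switch-based vs. threaded dispatch for a three-opcode VM:

    /* Hypothetical toy VM contrasting switch dispatch with threaded dispatch. */
    #include <stdio.h>

    enum { OP_INC, OP_DEC, OP_HALT };

    /* Classic switch-in-a-loop: every opcode is dispatched through one shared
       indirect branch, which is harder for the branch predictor. */
    static int run_switch(const unsigned char *code) {
        int acc = 0;
        for (;;) {
            switch (*code++) {
            case OP_INC:  acc++; break;
            case OP_DEC:  acc--; break;
            case OP_HALT: return acc;
            }
        }
    }

    /* Threaded dispatch (computed goto): each opcode handler ends with its own
       indirect jump, so the predictor can learn per-opcode successor patterns. */
    static int run_threaded(const unsigned char *code) {
        static void *labels[] = { &&op_inc, &&op_dec, &&op_halt };
        int acc = 0;
        goto *labels[*code++];
    op_inc:  acc++; goto *labels[*code++];
    op_dec:  acc--; goto *labels[*code++];
    op_halt: return acc;
    }

    int main(void) {
        const unsigned char prog[] = { OP_INC, OP_INC, OP_DEC, OP_HALT };
        printf("%d %d\n", run_switch(prog), run_threaded(prog)); /* 1 1 */
        return 0;
    }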
Yes, I presume it's very fast because of a number of smart design decisions. I would guess that the relatively small on-disk size of the executable is a consequence of these decisions, rather than a cause of the high speed. And as you point out, it's really the design of the core interpreter that matters.
> When Python switched the interpreter loop from a switch to a threaded one, for example, they got ~20% speedup[0]; I wouldn't be surprised if the fitting entirely within the I-cache (which K did and Python didn't at the time) gives another 20% speedup.
I'm familiar with this improvement, and talk it up often. Since certain opcodes are more likely to follow other opcodes (even if they are globally rare) threaded dispatch can significantly reduce branch prediction errors. But despite not having measured the number of I-cache misses on the Python benchmarks, I'd be utterly astonished if there were enough of them to allow for a 20% speedup. My guess would be that the potential is something around 1%, but if you can prove that it's more than 10% I'd be excited to help you work on solving it.
I am not involved with k, and things might have changed significantly, but around the 2003-2005 timeframe, Arthur had very conclusive benchmarks that showed I-cache residence makes a huge difference (IIRC I-cache was just 8KB those days ...).
The people who surely know what difference it makes today are Nial Dalton and Arthur Whitney.
> around the 2003-2005 timeframe, Arthur had very conclusive benchmarks that showed I-cache residence makes a huge difference
That sounds quite plausible. The front-end of Intel processors (the parts that deal with making sure there is a queue of instructions ready to execute by the back-end) has made some major advances since then. The biggest jumps were probably Nehalem in 2008 and then Sandy Bridge in 2011.
It's not that binary size no longer matters, but you almost have to go out of your way to make instruction cache misses the tightest bottleneck on a hot path. And when they would be the bottleneck, the branch predictor and prefetcher are so good that it's usually only a problem when combined with poor branch prediction, so it really only adds to the delay rather than causing it.
In order for the Q interpreter to fit in that small size, the language has some rather severe limits: for example, on the number of function parameters and local variables, and on conditional branch sizes. Forcing users to structure code around these limits feels a bit archaic to me. This is what compilers are for.
Would be really interesting to read a write-up on your experience. What do you program in now? How do you look at other PLs now? What do you miss, and what are you happy to see "just work"? What do you think other PLs (especially languages like Lisp, which rank very high in terseness) can learn from Q?
I would compare Q (and other APL-related languages) to the Vim editor. There you have a set of carefully chosen operations that are easy to perform; they don't take much effort. They are also easy to compose in useful ways, because their properties support that. Since the basis of editing operations is fairly large, you have many operations; but once you know many of them, you can perform powerful edits.
Lisp, on the other hand, is more like Emacs - naturally. Here we have a small, carefully chosen, orthogonal basis of abstract operations - not domain-specific, but a "theoretically foundational" basis. Then you have a library of macros on top of that, and, of course, the ability to extend it.
In other words, the basis for APL is "classical" math, made executable and extended with the mechanisms needed to express programming constructs (logic, control flow, ordering...) in one line. It's harder to extend, but you don't often need to. Lisp is a specific branch of math, lambda calculus, which is provably enough to solve any programming problem. The "inner core" of Lisp is also hard to extend, but what you extend for your task is "the usage" of the language, which is made to be straightforwardly extensible.