> For reasons that seem baffling in retrospect, Amazon understood this long before any of its major competitors
I don't think Amazon really "understood". It's more that (according to "The Everything Store", anyway) some emergency servers were accidentally reprovisioned as on-demand servers for a bunch of frustrated devs who couldn't get resources the traditional way. And then demand just exploded and they thought: why not sell it and see what happens?
It is true that Bezos appears to have a knack for spotting great products that appear accidentally in his org tree, and spinning it out as a separate team (e.g. the A9 team, Kindle...) but it might be that he thinks like a VC and backs everything that looks interesting instead of letting the bureaucracy kill it, and hindsight is 20/20...
There was another essay from Steve Yegge some years ago where he was comparing Amazon vs Google (he worked for both companies) and how Amazon completely transformed themselves into using internal APIs as the only method of communication between teams within the company.
Yegge has recalled an email to all Amazon staff where Bezos outlined new Amazon rules. This was in 2002.
1.) All teams will henceforth expose their data and functionality through service interfaces.
2.) Teams must communicate with each other through these interfaces.
3.) There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
4.) All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
The last point of the memo is especially telling, because it shows that Amazon was planning to externalize the artifacts of its various teams at least 4 years before AWS was actually born.
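To make rule 3 concrete, here's a minimal sketch of the difference it mandates; the hostname, endpoint path, and response field are all invented for illustration, not Amazon's actual interfaces:

    // Forbidden under rule 3: reaching directly into another team's data store.
    // const row = inventoryDb.query('SELECT qty FROM stock WHERE sku = ?', sku);

    // Allowed: calling the functionality that team exposes as a service interface.
    async function getStockLevel(sku) {
      const res = await fetch(`https://inventory.internal.example.com/stock/${sku}`);
      if (!res.ok) throw new Error(`inventory service returned ${res.status}`);
      return (await res.json()).quantity;
    }

Rule 4 then says that nothing about that interface may assume a friendly internal caller, since it might one day be exposed to the outside world.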
Reading it for the second time (I last read it a year ago):
- Is the ad business model a major reason Google is not providing a platform, or is hesitant about it? (Out of Amazon/Apple/FB/MS, FB is the only other primarily ad-based company (or is it?), so I wonder how FB is doing it.) Because programmatic interfaces mean developers can program out the ads (unless there's a way for a service provider to avoid that situation, IDK).
- Or has Google moved towards a platform/accessibility model recently? Or is it same old same old?
A memo was published (or leaked?) in which he said, abbreviated and misquoted: "Our internal services shall be robust enough to sell as services on the big bad internet. Assuming friendly callers is a firing offense."
"Why not sell it" is a bit of a joke. Before you can say that, you have to have a platform that's designed to handle unfriendly users. Users who may be presumed to attack each other and the platform. Setting that up doesn't happen by accident.
Prediction #7 about the mobile revolution was particularly interesting to me at the time. I was working in the telecoms industry and everyone was saying that super-smartphones were coming out in just a year, 2 at most, that would revolutionize everything, but that was all marketing hype. The reality at the time was to my mind looking really bad. WAP was pretty much a conscious effort by the telecoms industry to trap itself (or actually the users) in a technological stone age ghetto run by committees of marketing executives. The fact that Apple forced the carriers to allow an open mobile internet was utterly shocking to me - in a good way. I was expecting mobile computing to be a dismal carrier-controlled dystopia for a generation at least.
> The reality at the time was to my mind looking really bad. WAP
*shudders* I worked in that space right around the time the iPhone landed. WAP was way beyond looking really bad; it was an unmitigated disaster, it was horror.
> The fact that Apple forced the carriers to allow an open mobile internet was utterly shocking to me - in a good way.
To be fair, high-end nokia did already have "open mobile internet" available. Technically anyway: compatibility was a crapshoot and the experience was terrible. I had an E70 with a relatively competent webkit-based browser (IIRC); it did technically render pages, but:
* very small screen, making interacting with pages difficult (mobile-adapted pages were not a thing, and the browser didn't do much adaptation)
* arrows moving a small cursor around on a small unzoomed page made for a terrible experience
* the device lacked both CPU and (especially) RAM; interactions would be very slow and the browser would regularly crash and OOM
* flash was still very present and the device didn't handle it at all; it didn't cope that well with javascript either
After it was released, Opera Mini did help with the resources side (but not UI and compatibility). Anyway, my point is: an open mobile internet was available before the iPhone, it was just an utter wreck of an experience (though still not as bad as WAP).
> To be fair, high-end nokia did already have "open mobile internet" available.
Not really sure this is relevant. Apple forced the carriers to change their business models -- first AT&T and then the others; Nokia et al just played along. I was really disappointed that Google did not persevere with selling their own phone -- there was this shining moment where it looked like Apple and Google would unite to tear down the carriers; I think Google's shying away from customer service may well prove pivotal -- e.g. it's going to be hard to sell cars if you don't do customer service.
People often point out that Apple doesn't really understand the cloud or that Amazon's forays into hardware have been at best mixed. Maybe. Google -- alone of its rivals -- doesn't understand end-users (its customers are advertisers). Which is more important in the long run? (Bezos may suck as a boss, but Amazon's customer service is first-rate.)
> Not really sure this is relevant. Apple forced the carriers to change their business models -- first AT&T and then the others
1. didn't see it then, don't see it now. Then again I'm in Europe; mayhaps US carriers were significantly more overbearing. I don't recall carriers being good friends of the consumer though, so not overly likely
2. this responds to a comment specifically mentioning allowing "an open mobile internet". Again, this might have been because I was in Europe, but as my comment notes, I did have access to an "open mobile internet". The experience sucked, but technically there it was.
> 1. didn't see it then, don't see it now. Then again I'm in Europe; mayhaps US carriers were significantly more overbearing. I don't recall carriers being good friends of the consumer though, so not overly likely
The U.S. was indeed terrible: very restricted competition on phones (two year contracts, models often specific to one carrier, etc.), prohibitively high data rates and extreme barriers to entry for anyone trying to build a phone-based business.
Around 2000, I worked for a web company in San Diego where Qualcomm was trying to build a stable of companies developing mobile apps to boost demand for better hardware. We looked into the business model, and each of the U.S. carriers wanted roughly $50K simply for the privilege of listing your app in their store, plus a generous cut of the sales, should you have any. And that number wasn't fixed but based on what they thought you could afford, so our better-known clients got estimates well into 6 figures, again simply for being listed.
Unsurprisingly, we looked at WAP instead but it was so limited in practice that we gave up and nobody I knew used WAP or J2ME apps. As far as I could tell, that only changed when Apple had enough clout to get AT&T to go for a smaller slice of a much, much larger pie.
> The U.S. was indeed terrible: very restricted competition on phones (two year contracts, models often specific to one carrier, etc.), prohibitively high data rates and extreme barriers to entry for anyone trying to build a phone-based business.
For the most part that was also the case in Europe, though countries started cracking down on some of the bullshit fairly early (e.g. the ability to move your number from one carrier to the other and requirements that phones be unlockable after some time). I don't know about the phone-based business aspect.
The big thing which I believe really killed business options, though, was data pricing. Before the iPhone made affordable unlimited plans the expected default, it was assumed that anyone who wanted data was a business user on an expense account and the pricing was so high that everyone I knew treated app usage as an emergency-only measure rather than a part of daily life.
I remember reading a blog post by a mobile dev manager saying that it was shocking that the iPhone didn't support J2ME and that Apple would inevitably be forced to support it by market realities. Good times.
Everybody knew where mobile telecom was heading ever since data became possible: a downhill race to a commodity. You only had to look at Northern Europe or Japan to see how usage had changed. Everybody was scared.
The idea that the telco controls what software you run was unknown there. Data is data. You could almost see operators taking the first steps from inflexible contracts where they gave phones away to leasing-like models (for business customers; consumer markets went through the same change a few years later).
The only surprising thing is that Apple didn't change telco business models more, but I guess they didn't have to. US telcos continued to sell phones through their own channels and didn't have to yield to bigger data plans until years later.
But change is coming, because data by nature wants to be dumb. Soon all customers will find the idea of operator controlled phones just as foreign as some did ten years ago.
I did some playing around with WAP for my Moto RAZR. Turned out the phone could only store a page stack of about five. This varied with page complexity, so you might only get 3 deep before it locked up the phone.
I remember going to a trade show in London in 2001 called "m-Commerce World", which was all about the impending revolution in mobile transactions and payments.
At the time I worked for a web development company that identified itself as an "e-business solutions agency". It was in the process of collapse following the dotcom bust, and we were looking out for any new opportunities to cling to. I remember feeling a bit second-rate at this trade show when I introduced the company I worked for, because we were only "e-business" while the exhibitors were very proud of being "m-".
The phone makers at the time really had no idea how to make a full-featured mobile computer OS. Nothing happened until Apple gave them a kick and Google saved their butts by handing them Android on a platter.
You are forgetting about Windows Mobile for Smartphone and Symbian. There was a reason to expect a smartphone revolution: early adopters already had smartphones in 2004.
Yes, but they were utter crap. Their browsers were crippled, side-loading software was strictly for geek hobbyists only, and their apps were generally appallingly designed. Mobile internet was restricted by the carriers to a teletext-style menu-driven system with menu space sold to the highest bidder. It was awful. They had no idea what they were doing. Mobile payments were supposed to take over the world in 2001+. The carriers have spent the last 14 years utterly failing to execute on that, and even in 2004 they'd spent the previous 3+ years being utterly incompetent. Steve saw what was going on, and he was completely correct that even 5 years wasn't going to be long enough to sort all that out.
Windows Mobile back then had a fully featured browser. It was all pen driven, but it actually worked really well. The first iPhone seemed like a step back at least as far as the browser is concerned.
Data was just extremely expensive, but it was faster than the dial-up internet most of the country was still stuck with back in 2004.
Windows Mobile had a start button! Microsoft was literally shoving a desktop mindset into any form factor, and not designing for a form factor.
Windows Mobile sucked not because of WAP, but because you had to use it like a tiny desktop UI and poke at it with a tiny stylus to move a mouse cursor (depending on the version).
The iPhone wasn't awesome because it had a "full" web browser or whatever (though that helped). The iPhone was awesome because it was a pocket-sized supercomputer with a UI/interaction model that actually worked.
WM6, when designed to knock off Blackberry devices, was a little better. The "Start" button brought you to an application tray, and it had a 4-way rocker (and optional resistive, stylus-input touchscreen) for navigation. MS took some obvious hints from RIM, but they did deliver a good product.
Of course, it released roughly seven months before the iPhone, which beat the pants off of it. At least it could play NetHack...
No, that was in 1999. The mobile browsers in 2005 were built on WebKit (originally from KDE) and rendered pages identically to the iPhone. The latter came with a better zoom, but it was the fluent UI that caused the greatest stir.
"It was all pen driven, but it actually worked really well."
Steve Jobs was right in that no one wanted to use a pen (stylus). As the market matures, this is changing, and the stylus is regaining importance at least in certain niche areas. However, for the vast majority of the mass market, your statement was, and still is, a contradiction.
I said that it worked well back then. I didn't say anything about the mass market. So I didn't contradict myself. I think that you skim-read my post ;)
But, on the subject of the "mass market"...
What Apple/Jobs did was build an awesome device for media consumption.
Up until then we'd been talking about the PDA - a productivity device. It needed a physical keyboard and stylus input to make it "productive".
The best PDA up until then was the Palm Pilot: it had exceptional writing recognition, if you spent two hours learning their alphabet. It worked for email and spreadsheets. Battery life was excellent. Most people I know used it with a fold-out keyboard when traveling.
Apple realized that 99% of the time you'll be using your phone to play games, read, listen to music or watch videos. Their touch screen was fantastic; it took just seconds to learn how to use it. Sure, writing emails was a pain (even with the smart keyboards), but you'd only do that rarely.
So yeah, you're kind of right: no-one whose main use-case is content consumption is going to want to use a stylus. For content creators however a well-implemented stylus is a big sell.
That's why Microsoft are throwing so much weight behind the Surface Pro, and it's also the main reason people are buying it.
It takes me 100x less effort to shoot, edit, & upload a photo or video using an iPhone compared to the first $8000 Adobe Premiere PC rig I used to learn video editing, let alone comparing to the nearly useless PDAs (in this regard) of their day (and I loved my Palm Pilots & Visors). Voice-to-text transcription today is immensely easier & more accurate than writing with Graffiti input.
Instagram alone is evidence that there are immense numbers of people creating massive amounts of content on their touchscreen smartphones. Spreadsheets are super-niche content. Text, photos, & video are mainstream content.
Compared to email on the desktop during the PDA era, I'd bet people communicate with other people much much more now via their smartphones, whether that's through email, social networks, instant messaging, or video chat.
The reason iPhone was designed for this usage pattern probably has to do with the iPod, with which they owned mobile music consumption for half a decade. A PDA would have been a radical re-think.
Fans wanted a phone in there since day one. The only reason it took them six years to do it was Jobs' commitment to getting it exactly right (or "his way" if you wish), which meant the technology had to become available.
I tried two generations of Windows Mobile devices. Aside from anything else their battery life was abysmal, and when they ran out of juice they failed pretty catastrophically.
...and there was a browser on the Apple Newton (and, I'm sure, others) a decade before that and Steve Jobs (and, I'm sure, others) talking about developing smartphones a decade before that. It took >23 years of iteration, not 3 years of iteration, to finally reach a hw+sw+service combination compelling enough to actually revolutionize the industry.
I was working on WAP at the time (1999/2000) as well. And then a colleague brought back an i-mode[0] phone from a business trip to Japan.
It was small and had almost a week on standby (if you used it as a phone, which, before the smartphone revolution, most people did). And it could actually browse the internet properly: gateways had to do some conversion, but unlike WAP, it was working with a real HTML subset.
It was that day that I understood that the death of WAP was imminent, the mobile revolution was coming, and HTML was here to rule mobile as well, because the future was already there, just not evenly distributed.
Prediction is hard, especially about the future, but describing the unevenly distributed present is easier.
It's especially important for blog posts like this to have a very obvious timestamp on the page somewhere. This is a contemporary post but I couldn't verify that until I went into the archive view.
There is a worrying trend in minimalist design lately. It seems that more and more blogs are leaving out information about the author (less important here) and about when the post was created, two very important pieces of information!
The same happens on a large percentage of open source projects. Dear project maintainers: if I go to your website, one of the first things I want to know is when the last version was released and perhaps what's new in that release.
Sometimes you have to dig deep into the online docs to find this info.
Yegge has always been worth reading, which makes it sad that he's been basically silent ever since 2012 (about a year after the infamous "Platform" rant).
Earth to Major Yegge, Earth to Major Yegge, do you copy? Come back, we miss you.
Howdy! Boy, HN is terrible at recovering old accounts, so I made this new one.
I just posted an update to the predictions post, with my own scores. Take a look!
As far as blogging goes, I've been on a bit of a hiatus. There's been a lot to learn. The tech world is moving so fast! I've been in Google Cloud Platform for the past 3 years, and next year I hope to start blogging about the experience, and about the technology in general.
For now, though, I'm busy both with my day job at Google and with various personal projects, so there's not much to report just yet. I promise I'll be back someday. :)
I don't know when I gleaned it, but it was definitely a culmination of my time at Microsoft: building a platform is one of those painfully obvious things that everyone should be doing. I wrote a rant about it in 2010[1], thinking I was just putting out the obvious for lay persons.
WordPress has some really horrible backward compatibility baggage, but the largest reason for its success is that it's a platform. Being a platform is as simple as having an API to let others build on top. The platform API is like a bag of seeds that lets everyone do their own gardening using your seeds. This builds an ecosystem.
Accessibility, when it comes to platform APIs, is also best illustrated with a simple example and should be clearly obvious to anyone who used jQuery, Prototype and their ilk. At the time, jQuery offered method chaining, which other JS frameworks did not. And it also offered some really simple and well-named convenience methods.
$('#foo').append('OK!').show();
...returning 'this' in functions to allow method chaining was a light bulb moment for me after I saw how well jQuery did it (a minimal sketch of the pattern follows below). The jQuery plugin ecosystem stands on its own merit.
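For anyone who hasn't seen the pattern spelled out, here's a rough sketch of the "return this" idea; the `Wrapper` class and `$` helper are invented stand-ins, not jQuery's actual implementation:

    // Each method does its work, then returns the wrapper object,
    // so calls can be strung together left to right.
    class Wrapper {
      constructor(el) {
        this.el = el;
      }
      append(text) {
        this.el.textContent += text;
        return this; // returning this is what enables chaining
      }
      show() {
        this.el.style.display = '';
        return this;
      }
    }

    const $ = (selector) => new Wrapper(document.querySelector(selector));

    // Reads just like the jQuery one-liner above:
    $('#foo').append('OK!').show();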
The great thing about building a developer platform is that it leads to an ecosystem of apps and plugins. And the great thing about an ecosystem is that it gets you developer talent on a truly global scale.
I recall having a discussion many years back about why Microsoft enterprise licenses were mostly available only through certified partners. It struck me at the time that Microsoft was handing out 15%+ commissions for no reason when it could have adopted Apple's direct-selling model. It turns out Microsoft had essentially built a retail platform where being a Gold partner unlocked top margins. You could find Microsoft certified partners everywhere. And this built an ecosystem. These certified partners were cross-selling and up-selling Microsoft products to no end, layering their own support on top and pushing for 3-year release cycles so the stream of money would get renewed vigour.
That famous Platforms rant was supposed to be Google-internal, right? Implication: he's still writing, but you have to be a Google employee to read it.
Google really does have the best benefit program in tech.
EDIT: I'm kinda joking, and kinda not. Maybe it seems silly, but this really will be on my mind next time I hear from a Google recruiter.
In a way, this validates the "ghost town" argument about G+. There might be tons of wonderful content in this or that "circle", but hidden away it just loses any appeal.
I agree. His writing was excellent. Along with John Carmack's .plan files, PG essays and Joel Spolsky's blog, he had a huge influence on me really getting into programming.
Really miss him. I even read his posts on subjects I didn't care about, like Borderlands or Emacs. Read them all the way through too, sometimes more than once.
Definitely my favourite tech writer. I occasionally head back to read some rants and have a bit of a laugh. Let's hope we see some more in the new year.
I was really sad that Lisp didn't take off like it appeared it might in the early 2000s. With Practical Common Lisp[1], Yegge's articles and Edi Weitz's software[2], as well as other articles and resources available at the time, it really looked as though the programming world was beginning to get why Lisp is such a good idea.
And then the focus became JavaScript, which has all of the bad and many—but not nearly all—of the good points of Lisp, and one key differentiator: it runs in the browser, on effectively every computer, tablet and phone on the planet.
Maybe the Lisp-compiled-to-JavaScript efforts will help.
The move in the center of intellectual gravity from Java/C# to Javascript puts me in mind of Guy Steele's quote about Java, back in the day: "We were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp."
I think Javascript moves those Java programmers halfway further to Lisp. And while the Common Lisp revival movement sort of petered out, Racket (which is honestly much better than Common Lisp to my tastes) is still trucking along and might be poised to be the next step for a generation of programmers who've absorbed the Javascript mindset.
> And while the Common Lisp revival movement sort of petered out, Racket (which is honestly much better than Common Lisp to my tastes) is still trucking along and might be poised to be the next step for a generation of programmers who've absorbed the Javascript mindset.
While I personally like Racket quite a bit, I think that the Lisp most posed to pick up the JS crowd is Clojure/ClojureScript.
OTOH, maybe instead of a Lisp we'll get another halfway-from-where-they-are language for the JS programmers, and continue the Zeno's paradox of Lisp readiness...
I remember being very intrigued by the article back in the day, and I revisited it at least twice to find out if Steve ever confirmed which language he was writing about.
Today people seem to take for granted that the NBL was in fact javascript, but I still cannot find any source for this.
Does anyone know more and can point me in the right direction?
OK, fair enough, but there aren’t any other obvious contenders that fit his criteria. Also, ES4 was in development at the time and would have been a major overhaul to the language (something we’re now getting in ES6).
I've seen video of a talk in which Yegge says, in so many words, that "NBL is Javascript". Google suggests it was a talk at OSCON 2007, which I can't easily check right now.
He gave a talk not long after the initial post where he said something to the effect of 'You guys, NBL is obviously JavaScript!" I think I saw it on YouTube at the time, not sure if it is still available.
Totally unrelated, but I saw Steve present at OSCON probably 10 years ago, and it was amazing to see him present on his feet:
The video for his presentation was totally messed up, so his slides didn't load up. He just said "screw it" and went on presenting from memory to a room of around 4,000 people.
He nailed it.
Afterwards, Tim O'Reilly said he had never seen anybody deal with something like that and do as well.
If you get the chance to see Steve present, go for it. He's thought provoking and entertaining.
Unfortunately, blip.tv is gone. There's no sure indication that all the content will be migrated, and the odds are low; there was a lot of content at blip.tv. Luckily, most of it is archived and being uploaded to archive.org, and will be available sometime.
Yes, the blog author was pretty generous. He got almost everything wrong if you're being at all critical.
"The smartphone market is at least 5 years out." Three years later the iPhone comes out and defines the market. Saying it didn't get really, really popular for a year or two is misleading: if you were betting on the market being _at least_ 5 years out, you would have missed that boat.
Do you not consider #1 to be at least close to accurate? Instead of XML we got JSON databases (NoSQL) which are very popular in some circles. They definitely do not surpass the number of relational database installations, but I wouldn't be terribly surprised if they stored more data in absolute terms.
The quoted reason is “Nobody likes to do O/R mapping; everyone just wants a solution.” From that point of view, I think it's at least arguable that he chose XML databases as a stand-in for NoSQL, which wasn't really a thing yet. The fact that many NoSQL databases use JSON, which is the successor to XML for data interchange, lends a bit more support as well IMO.
In other words, I think saying XML databases are the future in 2004 is like saying JSON document stores are the future now. I don't view either as all that correct, but there's definitely been a surge in interest.
I'd say that non-sql databases did indeed explode in popularity, so this does get the nod. However, I think the prediction may have gone a little too far. Have they surpassed relational databases?
I'm getting the impression that a lot of people are reconsidering the "nosql" approach, as well, sticking with (or reverting to) sql databases, including simple ol' mysql. This was an interesting interview that showed up on HN a few days ago.
I was never all that enthusiastic about "nosql", partly because, personality-wise, I tend to be reluctant about new technologies, and I liked a system based on relational algebra and all that. That said, "nosql" wasn't really meant as a NO to sql. I get the feeling that the term got away from them a bit, largely because "you-know-sometimes-there-really-are-better-solutions-than-sql-for-some-problems-and-you-might-want-to-consider-them" isn't really an option as a catch phrase.
That said, nosql absolutely did grow massively in popularity, it did move from the fringes to mainstream use, so as a prediction, yeah, I'd agree this should score a few points.
A flipped coin can answer a question, but it can't ask the question. You can flip a coin to answer, "will someone invent a new way for people to socialize online?", but flipping the coin does not grant you the insight to even ask that question.
> I try to avoid making predictions. It’s a no-win proposition: if you’re right, hindsight bias makes it look like you’re pointing out the obvious. And most predictions are wrong.
If you are right, then you can point people to your correct predictions. If you are wrong, then silently delete the records or just let people forget about them.
Yegge's "It's not software" from 2004 has a nice bit about microservices -- pretty ahead of the hype curve on that one:
> What if every "API call" were hosted by a different server? Well, you'd have a lot of flexibility in fixing any given call without needing to impact clients that don't use it. And it'd be easier to distribute it over multiple machines — it seems like it'd be a better design than the BEA WebLogic design, which is to run a few copies of your MonolithService on each of a handful of machines. There would be new problems to solve, but who's to say it wouldn't be a better design overall? We won't know unless we ask the question.
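A toy rendering of what the quote describes, using Node's built-in http module; the port numbers and handler payloads are invented for illustration:

    // Each "API call" gets its own tiny server, so each one can be
    // fixed and redeployed without touching the others.
    const http = require('http');

    function serve(port, handler) {
      http.createServer((req, res) => {
        res.setHeader('Content-Type', 'application/json');
        res.end(JSON.stringify(handler(req)));
      }).listen(port);
    }

    serve(8001, () => ({ call: 'getUser', ok: true }));
    serve(8002, () => ({ call: 'listOrders', ok: true }));

Squint and that's the microservices pitch: independent deployment in exchange for new distributed-systems problems, exactly the trade-off the quote anticipates.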
That's fun to go back and re-read now. He correctly describes "Shit's Easy" Syndrome, but maybe there's an opposing fallacy that's heavily status quo biased. Legalizing marijuana is in fact pretty easy, as bureaucratic undertakings go. Lots of his "what ifs" only make sense for something that's illegal in the first place -- What if illegal smugglers just pull up on the beach and start selling coffee? Why would they do that if you can just stop in at Starbucks?
I kinda see the point of it, that it is indeed complex, but it's not like we have to be paralyzed by complexity. Once the political momentum is there, you throw a dozen or so congressional aides in a room for a week and tell 'em to write the new law, and they'll probably come out with something that's at least a reasonable starting point. Revise it a couple of times, then put it in place and see what happens. It won't be perfect, but it's tough to be worse than the prohibition status quo. They can always fix whatever ends up being troublesome in a year or two.
That's the whole point of that list. Transitioning from illegal to legal is complicated in ways that don't affect things like coffee that have always been legal - and therefore, many people don't think about, claiming it will be treated "just like legal thing".
I understand the point of the list, it's just that most of the items on the list are not actually issues, or at least not difficult issues, and legalization has gone pretty smoothly in multiple states now. What he fails to take into account is that it actually is easy to just stop throwing people in jail for possession whenever we feel like it, and then take a year or so setting up a licensed legal distribution system.
It doesn't sound like you do understand the point of the list then. The point is not that any particular issue is unsolvable or even difficult. It's that laymen eagerly suggest naive solutions without even realizing that these issues exist. That's why you need to plan out your project and devote resources to executing it. Amendment 64 was not a single line "marijuana is now legal". The governor even set up a task force to oversee its implementation. And there are still complications due to its contradiction with federal law.
Generally speaking, NBL will have to have a much greater focus on performance than so-called "scripting languages" have had in the past. I mean, if it's really the Next Big Thing, it'll have to run on mobile devices, game platforms, and all sorts of other hardware-constrained devices.
It sounds impossible, I know. But it's not. Those compiler writers are a tricky bunch, and they have decades of experience. A bunch of them got together and did a lot of head-scratching over NBL, and the result is going to perform quite nicely.
> Interestingly, for all his insight, Steve Yegge doesn't understand why Java is popular and lisp is not.
What do you think are the reasons? Some folks might claim that Java enables mediocre teams of mediocre programmers to turn out mediocre code, while they would turn out utter dreck in Lisp (we know that excellent programmers will turn out excellent code in Lisp, but there are business and operational risks associated with a small team of excellent programmers), but I don't actually know if that's true. Has research been conducted with large teams of mediocre programmers with mediocre management, using Lisp?
Let's be real: most people who make fun of Java aren't using a Lisp, and would also make a mess of themselves if they had to program in one.
We use Java at work. Obviously all of the following depends upon your use case.
The main reason we continue to use Java is probably legacy reasons. We have lots of library code. And we're a small shop, so every extra language we program in has a real cost associated with it.
The stability and backwards compatibility is great. We have some libraries we haven't had to touch in almost a decade, being used in apps we just wrote last week. Of course some of them we've updated to use newer Java libraries (such as java.time and java.util.stream in Java 8).
IMHO Java's biggest problem is the community. It's a bit enterprise-y. As long as you don't get caught up in that I think you're OK.
I think the other problem was it went so long between Java 6 and Java 7. With Java 8 there isn't a lot from the likes of Python and Ruby I miss anymore.
For imperative languages, I generally like Go and Python better, because I find them "funner". But once I get past myself, meh. I do think Go gets things a bit more right than Java, but not enough that it merits switching the language we use. Maybe if we had zero code and had to start out fresh.
> With Java 8 there isn't a lot from the likes of Python and Ruby I miss anymore.
Groovy will be feeling the heat here for its original use case. With the lambdas and terseness in Java 8, there's not a lot of reasons for it to exist.
I think it's to do with predictability: back-compatibility, discoverability. You can get new coders who can get up to speed on a codebase. Business customers, who really really want things to keep working, value that. And business customers ultimately decide what is used. Similar to your point. BTW, I'm not even sure expert Lisp programmers can easily maintain their own code (or, perhaps more importantly, would want to).
Operator overloading: very cool, nice for matrices, complex numbers etc. But easily abused to create impenetrable code (you must first get across all the definitions). C++ had it, Java avoided it. Lisp is so flexible, it's like everything is an overloadable operator.
I'm not aware of such research on Lisp; but the only successful Lisp projects I've heard of are with hot-shot coders. BTW: reddit was originally written in Lisp (inspired by pg, who gave them the idea for reddit); they switched to Python.
I suspect a lot of it is simply that it's one of the first "real" programming languages taught in universities and trade schools. People who get into software engineering purely pragmatically (their parents told them they could make a lot of money) but have no passion for the art simply default to it.
It's why I always ask about favorite programming language in engineer interviews. It's not that there's a right answer, but I do expect to hear some nuanced opinions. If the candidate says "Java" and then looks at me like I'm a little crazy for asking, I know they probably aren't a good fit.
Yes, but trade schools and unis choose it because it's popular in industry (it's not particularly popular with academics). It's not an ideal teaching language, because of all the boilerplate of objects and packages when starting with procedures. C is a better teaching language in that respect (python might be the best).
If you can afford it roadmap-wise, starting with the easy details and postponing the hard stuff sometimes works. When the easy things are done, the hard things sometimes become less hard, and the project has become more rewarding, so it's easier to see it in a good light.
If you ask 1000 people to ask a coin to identify the major trends in a complex ecosystem and make inferences about their future course... they will look at you oddly.
They don't have to ask the coin. They can just assign different meanings to heads and tails, flip the coin, and record the result as a prediction. If you have 1000 people do this, some of them will appear to be miraculous predictors just by chance. It doesn't matter whether they are talking about major trends in a complex ecosystem, or whether they believe they are performing inferences. Sometimes a correct guess is just a guess which turned out to be correct, rather than evidence of prophecy.
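A quick back-of-the-envelope check of that claim, as a throwaway simulation (the 10-question setup and 50/50 outcomes are assumptions made for the sketch):

    // 1000 people each "predict" ten yes/no outcomes by coin flip.
    // With 2^10 = 1024 possible sequences, roughly one flipper per
    // thousand nails every answer purely by chance.
    function flipPredictor(numQuestions) {
      let correct = 0;
      for (let i = 0; i < numQuestions; i++) {
        // each prediction matches the true outcome with probability 1/2
        if (Math.random() < 0.5) correct++;
      }
      return correct;
    }

    const predictors = 1000, questions = 10;
    let perfect = 0;
    for (let p = 0; p < predictors; p++) {
      if (flipPredictor(questions) === questions) perfect++;
    }
    console.log(`${perfect} of ${predictors} flippers look like prophets`);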
Because 40 years ago people were convinced it would be popular one day. And 30 years there was roughly the same percentage of highly convinced, vocal people who thought so. And 20 years ago same thing. 10 years ago ditto.
I mean, what is it going to take? 10 more years? 20? At what point do you conclude that a language really isn't useful for general-purpose programming? (Not to say it isn't brilliant and useful for specific purposes, learning, etc.)
You seem to define the reason as purely historical. Kinda like how Linux was going to take over the desktop "any year now" and it hasn't. Yet Linux is everywhere people don't think it is. It's on most of the world's smartphones, for instance but people still consider it "failed".
I think Lisp is the same: it's everywhere you don't think it is because it's opinionless and languages are defined by their community (and community is held together by the shared values and opinions). Look how many languages are rapidly adopting functional programming idioms. They're becoming Lisp. Eventually you start seeing the value of functional idioms in Blub language. Then you go and look at where it came from and notice, "Hey this is the real thing". Then you get real yourself. (This has been my experience).
How far off is prediction 9 really? What do sales matter when Apple takes more profits from their computer business than all those combined; Apple having a 30% plus margin and other computer makers a 2-3%?
Whether sales or profits matter depends on your perspective. I think revenue is a better guide than either.
Profit is really susceptible to distortion based on reporting. What exactly are you counting as costs? Do you include investments in the future? That's really a business choice, and it doesn't necessarily mean much.
At the end of the day, it's only the shareholders that really care about that profit. As a customer, it doesn't really matter whether that money ends up at the company or its contractors - you're paying either way.
As an investor, you (eventually) care about profit. If you're just trying to understand what the customers value enough to spend money on, revenue is a better guide.
Indeed. It seems likely that Apple could cut their margins and grab that 50% if they wanted to, but since that would reduce their overall profits why would they want to? It seems like Apple can at this point have pretty much whatever share of the market it chooses.
I'm not entirely certain whether their rivals actually value market share over profits as such. It's hard to run a company for long if it's not making money, regardless of what its market share is.
Market share matters for a lot of companies because economies of scale mean that your unit costs go down as volumes go up. This gives a natural advantage to market leaders, in addition to things like increased brand visibility.
Market share is very important to pundits though, because it changes very rapidly and is often measurable throughout the year on a per-product basis, whereas company financials are less frequently reported and harder to tease out. A battle between products that readers can see in the shops and read reviews for is easier to create a narrative around than corporate financial performance. I think this is why sales numbers and market share are focused on so heavily in the media.
Yeah, that's probably why engineers work for MBAs or sales guys. Most engineers have no concept of marketing, aspirational branding, or human psychology.
Unfortunately, the one concept we have down is "make it up in volume."
If so, that's a dangerous game Apple is playing. It's hard to predict the future, and it's by no means certain that it can continue to compete with the likes of Android; being so much smaller than Android means they're playing in a pool largely shaped by others. They're doing well now, but who knows for how long?
That's like claiming BMW is playing in a pool largely shaped by budget car makers like Hyundai. Apple may have only had ~5% of PC market share for 15 years, but it's owned roughly half the total global profit share for about a decade, and their PC market and profit share has been steadily increasing during most of that period. And they've been doing exactly the same thing in the mobile business for more than 5 years as well.
One thing is certain. A guaranteed way for Apple to fail is if they start worrying about how other manufacturers are 'shaping' the market. It would kill them stone dead in a matter of a few years, tops.
Even if you include the iPad, it's still not true. The top 3 sellers nowadays have about 50% of the PC market share, and they're still IBM (but now called Lenovo), HP and Dell. That's around million units - well in excess of iPad sales (Macs don't really make a dent). If you took the top 5...
And I don't think it's really reasonable to include tablets at all. The motivation is that they're a new, unforeseen development? But then you really should be including android tablets in those numbers too; and those happen to be sold primarily by other vendors.
At the end of the day, it's clear that tablets aren't laptops, nor do they look like they really replace laptops (especially not the ipad - you could quibble about microsoft's offerings, but those haven't been too successful yet.)
The heart of the prediction was that Apple would come to dominate the laptop business. Clearly, it has not. It's still mostly Windows vendors. If you include tablets, Apple does better, but there too it's a minority market share; Android dominates.
I meant to say 100 million but the number got dropped between keyboard and chair, sorry ;-).
Best estimate is that there are around 200 million laptops a year sold (2014) based on a PC market of over 300 million and that laptops outsell desktops (worldwide).
Looks to me like tablets sell roughly at the same rate, around 230 million units. (I think it's not easy to find comparable numbers so maybe my data is off).
So given Apple's approximately 25%-33% market share of tablets, that's just around 50-80 million iPads, which is still well below the top three in laptop sales (100 million).
For 2014 I see 230M tablets, 170M laptops, and recall that Apple is #4 in laptops as well. I think your figures are very shaky. In any event, Yegge's prediction is -- pretty shockingly -- in the ballpark. (Also bear in mind that most non-iPad tablets are in fact nothing like laptop replacements, and many are barely useful for anything or are dedicated bookreaders.)