I was recently told a story by an internal developer who works for a company that outsourced its IT infrastructure to IBM.
Their deployment process for a business-critical Java application is as follows (a sketch of what this amounts to appears after the list):
- Someone calls IBM to alert the person (resource) responsible that they want to do a new deployment. If this person is sick or on vacation, try again later.
- Then they send over a jar file with the compiled application.
- The responsible person copies the jar file to the appropriate location and restarts the server.
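For perspective, the gatekept "deployment" here amounts to a file copy and a service restart, the kind of thing a few lines of script could do. A minimal sketch in Python (the paths and service name are hypothetical, and this assumes a systemd-style host):

```python
#!/usr/bin/env python3
"""Hypothetical sketch of the manual 'copy the jar, restart the server' deploy."""
import shutil
import subprocess

JAR_SRC = "build/app.jar"                # hypothetical: the jar the team sends over
JAR_DEST = "/opt/appserver/lib/app.jar"  # hypothetical: "the appropriate location"
SERVICE = "appserver"                    # hypothetical service name

# Step 1: copy the compiled jar into place.
shutil.copy2(JAR_SRC, JAR_DEST)

# Step 2: restart the server so it picks up the new jar.
subprocess.run(["systemctl", "restart", SERVICE], check=True)
```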
The company I work for (which shall remain unnamed) signed a billion-dollar, 10-year contract with IBM for IT infrastructure.
I needed to log into a Mac Pro that had ended up in one of the server rooms IBM took over in our building.
I basically needed to connect a keyboard, log in, and enable remote desktop so I could run a few Xcode builds (I don't have a Mac, and I needed to test a few hybrid Sencha/PhoneGap apps).
After several phone calls involving my boss and his boss, three meetings with five IBMers, and 3 months (I kid you not), we gave up and ended up borrowing a few personal MacBooks from friends to test the apps.
The server room was a few meters from the conference room, and it was a 2-minute job, but we could not enter and make the changes ourselves without IBM's permission.
The word frustration doesn't even cover it.
(oh, and the 'copy jar files' thing? totally true.)
>> The server room was a few meters from the conference room, and it was a 2-minute job, but we could not enter and make the changes ourselves without IBM's permission.
Sorry, but there are reasons why your dev team is kept out of the server rooms: generally, when permission is given, they go in and make a big mess which someone has to clean up later. Plus there are thousands of other issues, like data security, theft, etc., that need to be taken care of here.
I've worked for an outsourcing firm here in India; we partnered with another outsourcing firm. While we were there, our partner offered a lower hourly billing price and we lost most projects to them. We had a strict server room policy. They came in with all these policies of 'process elimination' and 'cutting bureaucracy', which ultimately translated directly into cowboy behavior and a total collapse in discipline.
After some time, chaos ensued. Devs would just walk into server rooms for even simple things like collecting logs or troubleshooting production software issues. Until someone accidentally pulled out a power cable while toying with a server. Downtime happened, followed by all kinds of data consistency problems. It took quite a while, and a lot of beating from senior management, for them to realize rules exist for a reason.
My respect for IBM just increased after I read your story. If I ever have to outsource some IT infrastructure in the future, I may choose them.
All these examples are interesting - but why single out and pick on IBM? Seems like IBM is responding to natural competitive pressures in the technology consulting/services industry. Accenture, Wipro, TCS and others basically sell services to US companies and fulfill them overseas. It is not a glamorous business, and the work products are often shite, but this is what happens in price wars. Customers can blame themselves.
What is disappointing to read in the IBM case is yet another example of slashing lots of jobs in the US and shipping them overseas. And yes, a lot is lost in the process - it's not a magic one-for-one replacement. The older I get, the more the platitudes used to justify this ring hollow. So much action is driven to satisfy the relative greed of stockholders, and arguably not "necessary." America has forgotten that the health of the stock market != the health of the citizenry.
Technology services jobs should be among the highest value work we can do onshore. We can't all be bankers and lawyers, nor would it be desirable even if possible.
I don't fault IBM for the above anecdote - it's what the customer agreed to, and in a way it was surely in the customer's best interest. The organization in question had long since stopped rewarding or investing in any sort of homegrown excellence.
Sadly, it's the same cycle all the way through - neglecting internal investment until you have no choice but to continue paying your minders on their terms.
The old saying "nobody ever got fired for buying IBM" is unfortunately still strong.
Or, better, there are still a lot of, ahem, uninformed people that will overpay for their crappy service/software.
Anything I've seen from IBM has been enough to convince me to never, ever spend one cent on anything from that company, with one exception: Power servers.
> uninformed people that will overpay for their crappy service/software
Those are executives over 55 who haven't kept up with technology.
The reaction among founders under 40 would be exactly the opposite: Contracting anything out to IBM would mark you as technically incompetent and a reckless spendthrift.
Here is another one I heard. IBM got the contract to upgrade Vodafone India's infrastructure, valued at around $300 million. They in turn sub-outsourced it to Amdocs. However, Amdocs was not able to deliver on time. Vodafone threatened to cancel the project, and the IBM project head in turn was planning to withdraw from Amdocs. The Israelis called Obama, who pressured Ginni Rometty to let it continue.
Bluemix is too little too late, and appears to be an advertisement for all the open-source 'cool' tech that IBM did not invent.
Let's not even get into the fraud that was the share buyback (second only to Apple's, I believe). Now that that's ended, the tricky question of where that elusive revenue will come from rears its ugly head.
I'm pretty fucking scared about what a company in decline with an arsenal of patents like IBM's is going to do to hit its EPS targets in the next few years.
I haven't read Cringely before. Is he a trustworthy reporter, or a sensationalist? I will buy his book if he's a sober observer. I usually hate these kinds of books, though, because they are often the product of sensationalists looking to make a quick buck.
His book "Accidental Empires" (http://www.amazon.com/Accidental-Empires-Silicon-Millions-Co...), and the PBS documentary miniseries that was based on it, "Triumph of the Nerds" (http://www.amazon.com/Triumph-Nerds-Bob-Cringely/dp/B00006FX...), are both terrific, must-read/see histories of the dawning and maturity of the age of personal computing. The doc is especially fascinating now because it was made in the window of time after Steve Jobs' failure at NeXT, but before his triumphant return to Apple -- so it provides a glimpse of him humbled and circumspect, which is a very different tone than that he took in nearly every other public appearance ever.
Cringely's more recent work has been kind of hit or miss, though.
OH WOW. He's that guy? That documentary is one of my favorite things of all time. Ok, just went to Amazon and ordered a copy; thanks for reminding me of this.
It is worth "reading the whole thing" for the anecdote about how a single well intentioned but ignorant employee almost killed Intel, along with the point he makes that technical companies are less ... solid that previous ones (my word, not his). Normally only a few people at the top of a Fortune 500 company can quickly kill it; in technical companies there are a lot more.
You don't make a quick buck on a story of the fall of a company. Business books, as a genre, are utterly dominated by success stories, because the people who buy them want to be inspired or to learn things they can imitate from winners.
Yes, that is flawed logic. But it's what people tend to want. So if you're trying to make a quick buck, that's what you latch onto.
If, on the other hand, you're following your intrinsic curiosity, you want to study fallen giants to see what can be learned from past mistakes. None of that is sexy. "Fallen," "past" and "mistakes" are not words you tend to see on business book jackets. But some writers will do such a book anyway.
I'm not saying Cringely is automatically to be trusted here, but it's not accurate to say books like this are often written to make a quick buck. If you want a quick buck, you write something glowing and positive and no one will complain.
It depends on who you ask. As a former IBM'er, I can attest that many of the employees I worked with turn to Cringely and the IBM Alliance [1] for IBM-related news and rumors.
> I haven't read Cringely before. Is he a trustworthy reporter, or a sensationalist?
I've read him since back in his InfoWorld days. For the most part I'd say "non-sensationalist", unlike, say, Dvorak. I don't know that I'd classify him as a reporter, though; he's more like a pundit. I mean, I don't look to Cringely for hard facts but more for his take on the facts that we collectively know.
Anyway, were it the type of thing I cared to read about, I'd buy his book.
EDIT: "Crinkly"? Seriously, Android auto-correct? (Though to be fair, Mac OS X tried the same thing when I fixed it.)
As a very long-time reader of Cringely[1], I've seen him be both spectacularly right and spectacularly wrong in his predictions and his inside information. However, even when he's been wrong, he's given me a completely different perspective on products or companies or technology that I hadn't thought of.
Having read his many articles about IBM, I'd say he's very nostalgic about a "great" company that's failing. I think the book is a passion project for him because he likes IBM (or liked the old IBM).
[1] I mean this Cringely, the famous Cringely, in case you're not aware that this is a pen name and that there have been a string of mediocre writers using the same pen name: http://en.wikipedia.org/wiki/Robert_X._Cringely
> I haven't read Cringely before. Is he a trustworthy reporter, or a sensationalist?
It's kind of hard to say. I've been following his stuff since sometime in the '90s, and he's definitely brash and opinionated (at times, anyway), but I don't know that I'd exactly say he's "sensationalistic". I'd probably take his work with a small grain of salt, but I don't see him as somebody who just "makes shit up" out of thin air.
He's sensationalist, but he occasionally has some very interesting insights. I wouldn't take his word as factual gospel, but it will be an interesting read, and it may provoke you to think differently about the tech industry and look at other patterns you may have missed.
It's not even clear that this is reporting, as much as it's opinion pieces about a particular company that he thinks is doing things wrong.
A quote from the intro that seems like a bit of a red flag: "The book had to be written because writing the same story over and over for seven years hasn’t changed anything."
Uh, OK. I use "recent" to denote that I've only recently become an ex-IBMer, so my knowledge of the company's workings is perhaps more relevant than that of someone who left 10 years ago.
I work for IBM's new design group and honestly I'm pretty optimistic about the future of the company (and I'm generally a pretty big pessimist/realist). Aspects of the company have issues, but many of those are being fixed/changed.
Yep. I work at a big company that uses WebSphere products & they're pretty rock solid.
My only issue is a personal one (I prefer to pick and choose my own stack), and there are free technologies more amenable to that. But as far as creating a suite of business applications is concerned, the biggest problem I see is developers not knowing how to control WebSphere (auto-deploy, auto re-admin), which seems to be cultural rather than an issue with the software.
And FYI to all those complaining about the "send a jar to IBM?!!" scenario: you should realize that sending around WARs/EARs is pretty commonplace in the Java world. It's kind of like zipping a certain git commit, for all you dynamic folks (see the sketch below). I agree it would be nice to have something simpler, but corps are not in the habit of sharing their repos with each other, and creating WARs/EARs/JARs is trivial; every project already has a build script for this in daily use. But our stuff is on-site, luckily; I have no experience with outsourcing infrastructure to them.
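To make the "it's kind of like zipping a certain git commit" point concrete: a JAR/WAR/EAR is literally a zip archive with a manifest. A quick way to see that for yourself (the file name here is hypothetical):

```python
import zipfile

# Any .jar/.war/.ear opens as an ordinary zip archive.
with zipfile.ZipFile("app.jar") as jar:               # hypothetical file name
    print(jar.read("META-INF/MANIFEST.MF").decode())  # the archive's manifest
    print(jar.namelist()[:10])                        # a few of the packaged entries
```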
From my limited experience working at IBM (7 years ago I interned for a little over a year), I think you're right. A lot of what the author writes is absolutely true, but I think he underestimates the power of a few good teams to lift the company out of something like this.
A ~300-page post-mortem of a company that is still alive? This isn't BlackBerry we're talking about. IBM has (as the author duly notes) huge financial reserves and can easily stay relevant for another decade, IMHO.
It seems like markets and investors have become accustomed to the huge, meteoric rises of a small handful of star companies, forgetting perhaps that plenty of businesses don't need to become household names overnight to succeed.
One could write a similar article for Microsoft, really. I think they screwed the pooch with Windows 8 and Surface. They tried to protect both the Windows and Office monopolies simultaneously in the face of radical change, and lost both.
IBM has been the biggest player in enterprise IT since it was done with typewriters and adding machines. For them to become irrelevant, even if it takes a decade, is a huge deal.
(You might be conflating the startup community with the open source community of the late nineties, members of which did say that Linux would "soon" overtake Windows on the desktop.)
> You might be conflating the startup community with the open source community of the late nineties
Considering the large overlap between those two communities, I'm not sure it's an unwarranted conflation. The first tech boom included a bunch of Linux-oriented startups and IPOs, whose successful pitch to VCs involved essentially, "Microsoft is on its way out".
True, although a decent number of the web startups also thought Microsoft was the walking dead, because it had missed the web (which was sort of true, but ended up not killing MS).
How was Office ever a monopoly, and if it was, how exactly was it lost? I don't get it. It's not irrelevant, and it's being used as much as before. If not, please provide numbers.
Yes IBM is still alive and will be for quite some time but they have truly sold their soul to achieve short term goals. They are definitely not the same company they were even ten years ago.
I figured people would push back, but I really don't see how. Everything about this company sounds totally screwed. It's funny that you bring up BlackBerry, because what you're saying now is pretty much what people were saying about BlackBerry in 2011 or so.
RIM was never even remotely close to IBM's market and general social influence. The comparison is certainly a poor one.
By this argument, we can safely say "X is dead, since RIM died in < 5 years." Substitute X with any company you can think of. This is obviously wrong, so the argument does not stand.
Gibbon’s thesis was that the Roman Empire fell prey to barbarian invasions because of a loss of virtue. The Romans became weak over time, outsourcing the defense of their empire to barbarian mercenaries who eventually took over. Gibbon saw this Praetorian Guard as the cause of decay, abusing their power through imperial assassinations and incessant demands for more pay.
We're not quite there yet. You don't see the Koch brothers trying to assassinate Obama. (No, attack ads are not the same.) You don't see a shooting civil war between Koch supporters and Soros supporters. And all that happened during the Roman Republic.
Moving on to the Empire, you don't see soldiers assassinating the president (or members of Congress) because they want higher pay.
I know that it feels similar, but we really aren't there... at least not yet.
> Moving on to the Empire, you don't see soldiers assassinating the president (or members of Congress) because they want higher pay.
Perhaps because that's not how politics is played in the 21st century in a Western country.
That doesn't necessarily mean the decline is not there -- it could be just as big as Rome's, only different. For one, it won't have much to do with "barbaric hordes".
Agreed. We've got a long way to go. Remember, Rome, the Empire, lived on for some 500 years. And that's not counting the Eastern Empire after the first sack. And Rome was sacked a few times.
When you see local gangs taking control of areas of the country (in some cases printing their own money), we've arrived.
It's interesting that he claims IBM has probably been doomed since 2010, considering that the stock price in 2010 was around $120 and it is now $187.06. I guess a 50% increase means the company is dying.
These are just egregious cases. The stock market as an oracle is thoroughly overrated. BlackBerry, the late 90s market bubble, the mid-00s financial products bubble... stock price movement says less than you think it does.
Whenever you see many people (not just this author) talk about a company's "financial engineering", you should be concerned they might cross that line; couple that with an explicit pledge to double earnings per share to $20 by 2015, and the danger should be apparent.
I believe his premise is that IBM will do anything to improve earnings-per-share numbers, even if that kills their future. So from his perspective, good stock performance is expected even if you agree with him that the company is doomed.
Exactly. Based on reading some of his previous essays, plus many other sources that are seeing the problem: "In 2010, Rometty's predecessor, Sam Palmisano, pledged that per-share earnings would reach $20 in five years, a plan called Roadmap 2015." (http://www.businessweek.com/articles/2014-05-22/ibms-eps-tar...)
So part of this policy is an explicit pledge by the company. How it's being achieved, or attempted, is of course another story, as is, say, whether everyone should dump their IBM stock once it reaches that goal, assuming enough greater fools can be found....
If IBM's stock price were nothing more than uninformed public opinion, then it would have been plummeting for, what, the past decade? Most non-technical people don't know IBM still exists (they almost entirely disappeared from the general public's view when they stopped making consumer products), and most people in the trendy half of the tech industry seem to think that IBM is not long for this world.
It's a speculator's market, so applying logic is a fool's errand. Nonetheless, if you buy IBM you're trading very liquid $185 cash for a very liquid share with a tradition of paying a quarterly dividend of around $1. That's an equivalent interest rate "yield" of 2.40% or so. My local credit union is only paying 1.44% on a similar long-term-ish investment. It's not completely worthless, but it's being milked like a cash cow, not treated like a tech company. The P/E ratio is something like "ten", compared to Google's "thirty" or Facebook's, which off the top of my head is like "a hundred".
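A quick back-of-envelope check of that yield arithmetic; the $1.10 quarterly figure is an assumption (the rounded "around $1" that lands near the quoted ~2.4%):

```python
share_price = 185.00       # the "very liquid $185 cash" per share
quarterly_dividend = 1.10  # assumption: "around $1" a quarter; $1.10 matches the ~2.4% quoted

annual_yield = 4 * quarterly_dividend / share_price
print(f"equivalent yield: {annual_yield:.2%}")  # -> 2.38%, vs. the credit union's 1.44%
```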
Comparing the yield of this dead horse to a company that actually matters, like Caterpillar or XOM, it's about the same, around 2.5%.
Of course, .gov guarantees return of my capital from my credit union, but who knows what'll happen in the market WRT IBM, so it should return a bit more. CAT and XOM are actually worth something as a going concern. IBM, well...
So... a rate of return below the real-world inflation rate, no prospects for growth, and no one in the market expecting future growth (not paying a high price now for earnings later). Doomed. Maybe Google or Apple will buy them for pennies on the dollar? IBM has patents...
They've also employed a lot more "financial engineering" to make that happen, too.
Bloomberg Businessweek had a feature article two weeks ago, and one thing it highlighted was that the use of "non-GAAP" in IBM's earnings releases had increased 10-fold or something.
BlackBerry sales were also up even after the iPhone came out. Even after Android first hit the market, too.
Now they're pretty nearly dead -- and I don't think there's any doubt that it was the iPhone and Android that killed them. Sales and stock prices often lag behind a real inflection point.
It strikes me that IBM has become full of the sort of people who can see how to cut fat from a business, but without any of the sort who know how to grow one -- or at least, if they are there, they don't have much influence.
One of the most important lessons I learned in big corporate America was that in the end the cutters always lose; even if they can make small gains in the short term, there is only so much you can cut. The upside is potentially larger, but finding it is the core of the problem.
Your typical software developer at IBM has 7 or 8 levels between him and the CEO. They're good at cutting, but I'm not sure "fat" is what they know how to cut.
It was, what, 1890 when the US Census took more than 10 years to process one census that is supposed to happen every 10 years? So the US Census needed a better tool. Enter H. Hollerith, who borrowed the idea of punched cards from, maybe, the Jacquard loom.

Tom Watson was with National Cash Register or some such, saw the Hollerith equipment, concluded that it was the secret to automating routine business record keeping, especially accounting, and IBM was born.

Watson sold the heck out of his equipment. And he had a team of good electro-mechanical engineers in Poughkeepsie designing and building the equipment. That takes us to about 1960. They made a lot of money.

About then, Tom Watson, Jr. visited Columbia University and reported that there was a guy there doing things, whatever, 200,000 times a second. An IBM life insurance customer in NYC complained that the IBM punched cards were taking up too much room. So some changes were needed, e.g., magnetic tape.

Due to US WWII work on vacuum tube digital computers, and pushed by Univac, IBM got into the computer business, with magnetic tape, software, etc.

In the late 1950s Watson was told that he had no research division and that to have a good one he needed a leader good researchers would respect. So Watson got a chunk of land in Westchester County, NY, and hired Eero Saarinen to build him a fancy research lab on a hill near Yorktown Heights.

By the mid 1960s, IBM had an idea: a line of compatible computers called System 360 (the origin of the IBM 'mainframe' computers). A customer could start with a cheap, small, slow System 360/20, 30, 40, etc., and when they ran out of capacity upgrade to a 360/50, 65, 85, 91, 95, etc., and just keep running the same software.

IBM regarded itself not as an electronics company or a computer company but as a marketing company. The marketing/sales people had nearly all the power.

The focus was still on routine business record keeping, 'business machines', and not general purpose computing. IBM had some good work in general purpose computing, but the executives stayed with their 'good IBM customers'. So when MIT did Multics in 1969, IBM was slow to respond. When DEC was selling good general purpose time sharing on the DEC 10, IBM was slow to respond.

As a means of getting a time-sharing computer to use for developing operating systems, IBM's Cambridge Scientific Center did CP67/CMS for the IBM 360/67 virtual memory computer of about 1967, where CP67 was the virtual machine. Lots of people loved CP67/CMS, later VM/CMS, but IBM's marketing people didn't want to sell it. Internally, VM/CMS ran the company through at least 1994. There was also VNET, roughly like the Internet except that the computers were also the routers, which handled essentially all of the IBM internal computing through at least 1994. Use it? Yes. Eagerly sell it? No. 'Dog fooding'? No.

In the 1970s, IBM did move to System 370, a 'mainframe' which had virtual memory. Customers wanted to do interactive, on-line applications, and IBM responded slowly. For the communications, IBM did Systems Network Architecture, which was about as flexible and easy to install as a railroad, and about as costly. IBM did notice that with so much on-line activity they could sell a lot more System 370 computers, and did. IBM was slow to let the capacity of the 370 computers increase as quickly as customers wanted, but by 1980 IBM had a collection of faster boxes. As 18-wheel trucks lined up to receive these computers, the line backed up to the NYS Thruway, and there was a blip in the US GDP.

A lot of IBM customers were doing 'personal productivity', e.g., word processing and electronic spreadsheets, on IBM's mainframe computers. But that was 1980, and DEC, Data General, Prime and others were doing well. The Prime was a single-board, bit-sliced, virtual memory computer with some extra register sets for fast process switching, and a darned efficient time sharing box, much more efficient, and much easier to use, than anything from IBM. And in 1980, Prime gave the best ROI on the NYSE.

Also about here came the microprocessors, e.g., the Intel 8080 and 8086. Then PCs began to explode. Then work started moving off IBM's mainframes. By 1986 or so, DEC was getting more revenue from DuPont than IBM was.

There were well-done IBM internal reports on technology and markets that outlined the future and the threats to IBM's business, but IBM's leadership, really successful mainframe salesmen, essentially ignored the reports. The mainframe people had the power and worked to kill off anything else inside IBM. At one time, IBM CEO Gerstner said that "IBM is the most arrogant, inwardly directed, process oriented company I've ever seen."

By 1986, at an internal top management meeting, it was possible to go around the table and find that nearly no one had made their projections. The conclusion was that God had ceased to smile on IBM. IBM laughed at Intel and Microsoft, and those two came to have by far the last laugh.

For a while, IBM's Cocke's work on reduced instruction set computing (RISC) gave IBM an opportunity to grab the high-end desktop and workstation markets, e.g., for finance, engineering design, and graphics, but the mainframe people didn't like the competition. E.g., when an IBM mainframe had a processor clock of about 10 MHz, Cocke's discrete-component board with RISC had a clock of 80 MHz.

Near 1994, in three years IBM lost $16 billion and went from ~400,000 employees down to ~200,000. The research division phone book went from 4500 names down to 1500, with about 500 of those recent temporary employees.

Then IBM pushed services, e.g., they would run your data center for you. IBM bought companies with attractive products and put the IBM marketing force behind those products.

Net, the first and last 'visionary' thing IBM did was to grab Hollerith's work, although, of course, it was Hollerith who was the real visionary. Since then IBM has focused on selling 'business machines' for routine business record keeping for large banks, insurance companies, manufacturing companies, and governments. That's their 'market'. They haven't tried very hard to look for other markets. In technology, IBM lets others take the first steps and maybe does something similar or maybe just buys a company. That's what they are still doing.

IBM had a lot of opportunities they just dropped, apparently deliberately: They could easily have had all of Cisco, Intel, Microsoft, and Oracle. At one time they ran NSFNet, that is, the Internet; had IBM stayed with that work, they could have been running some huge fraction of all of the Internet. IBM was long a leader in laser printing, e.g., for printing bank statements on a roll of paper moved with a forklift truck, but in the US HP made the big bucks in that market. In the 1980s IBM saw the need for video servers and had some working; same for wearable computing. For the Internet of Things, IBM has long had the TCP/IP stack on a chip. For relational database, IBM invented it, roughly parallel to E. Wong's work at Berkeley, but now the revenue for relational database goes to, right, Oracle, etc.

IBM did early work on the tricky 'passive' amplifier wrapped around an optical fiber to amplify the digital signal without converting from optical to electrical; likely that amplifier is crucial for the backbone of the Internet, but IBM is not running that backbone; maybe IBM is getting some patent license revenue. IBM did good, early work on giant magnetoresistive disk heads and the associated vertical recording, and was long the leader in magnetic disk, but now people buy hard disk drives from Seagate, Western Digital, Maxtor, etc. IBM did a lot of high-end work on large disk storage systems, but people buy from EMC, build their own, etc.

It goes on and on: At one time or another, IBM had in their lap the beginnings of nearly everything we see today in information technology but dropped it. IBM remains focused on being a 'marketing' company, but now they are a bit short on what to market. Ah, yet another mainframe salesman bites the dust.
Jumping on computers was "visionary" in the same way as Hollerith's work, but perhaps more so, with various complications like an antitrust suit to help jump-start it. But they did a lot of interesting things in that period: they seriously pursued scientific computing for a while (it was one of the biggest markets for some time), made relatively affordable machines like the 650 (main memory a drum; the first mass-produced computer, per Wikipedia) and the 1130 (360 technology with some very clever hacks), and certainly innovated in computer languages (FORTRAN, maybe PL/I). They taught us a lot about how not to write software (The Mythical Man-Month), but were hardly unique in that, or in the second system syndrome.
In a pattern I observed at the time, companies that really screwed up their disk drives, as in too many failures in the field, lost so much brand equity that they were forced to sell the remnants to another company. Or at least this is how I interpret IBM's sale of their drive unit to Hitachi in 2002, and Maxtor's to Seagate in 2006.
Anyway, the point being that for disk drives IBM did worse than the pattern you describe above; this was a severe execution failure.
Although that also seems to be happening in services.
You make good points. I typed ASAP, and there are a lot of possible "quibbles" with what I wrote.

You are correct about the 'vision' thing -- Tom, Jr. did a very gutsy thing pushing out System 360. He essentially bet the company. By then IBM was well into computing, e.g., the 7094, based on transistors instead of tubes, but 360 was at least two giant steps more. But I do remember that somehow about then DEC was able to do the PDP 10, a nice system -- hmm. Dartmouth was able to do the system that GE used to sell time-sharing, etc. MIT did Multics. Net, others were able to write OS software without betting a Fortune 10 or so company on the work.

You are fully correct about the anti-trust suit -- in a sense it made IBM 'gun shy' and timid for a long time.

As I recall, IBM long sold the main chips used by Cisco and Juniper. I can guess that IBM gets patent licensing revenue for disk head technology, and maybe sells the heads themselves. But the big bucks are in selling the drives and the subsystems. I forgot about the sale of Maxtor to Seagate.

I guess my broad point was that IBM was essentially always a 'marketing company' run by successful mainframe salesmen. E.g., once, the day after my wedding, I got an offer from the IBM Chicago branch office, and the head there gave me that description of IBM. They wanted me to hold the hands of the oil refining customers using linear programming to make decisions on what inputs to take and what outputs to make -- big bucks in that, still. They had sold a 360/85 and likely wanted to sell more. Gee, I might have gotten to work with Ralph Gomory, later head of IBM Research.
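For anyone curious what that refinery work looks like in miniature, the "what inputs to take and what outputs to make" decision is a classic linear program. A toy sketch with made-up numbers, using scipy's linprog as a stand-in for the mainframe LP codes of the day:

```python
from scipy.optimize import linprog

# Toy refinery: choose barrels of two crudes to maximize profit,
# subject to a capacity limit and a sulfur limit (all numbers invented).
profit = [-10.0, -7.0]       # $/barrel for crude A and crude B (negated: linprog minimizes)
A_ub = [[1.0, 1.0],          # distillation capacity: total barrels <= 100
        [0.5, 0.2]]          # sulfur content: weighted sum <= 35
b_ub = [100.0, 35.0]

res = linprog(profit, A_ub=A_ub, b_ub=b_ub)  # variables default to >= 0
print(res.x, -res.fun)                       # -> [50. 50.] 850.0: 50 barrels of each, $850 profit
```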
I guess part of my 'broad point' was that they kept throwing off their plates little opportunities like Intel, Microsoft, Cisco, Oracle, Seagate, EMC, operating the whole Internet (can you believe that!), Yahoo (IBM had Prodigy, good idea, bad execution), OS/2 (ahead of anything from Microsoft until Windows NT or 2000), Netscape (IBM had a decent Web browser early on), and Web servers ("ah, what's to do with a Web server; trivial, right?" -- build one that's easy to program and serves 10,000 pages a second, and then tell me that).

PL/I? Yup! It was done by a committee headed by George Radin in about 1964. I used PL/I and CP67/CMS to do the first computer-based scheduling of the fleet at FedEx. Nicely enough, I was paid well to learn PL/I by the US DoD at the JHU/APL for some work on passive sonar and the FFT. Once I tried to talk to Radin about operating systems, and he said, "Three times in my career I tried to help IBM in operating systems, and three times I broke my pick trying."

I remember the sale to Hitachi but didn't know that the IBM drives sucked. I've heard only good things about Hitachi drives; sorry I didn't mention them -- another quibble.

IBM had object oriented programming in microcode as part of 'Future Systems' in 1980, or was it 1970? But they were a 'marketing company' selling to 'good IBM customers'.

> Although that also seems to be happening in services.

I can believe that. IBM long had their pick of the job pool, something like Google seems to today; likely no more.
And it was a fairly ugly, long, drawn-out second system syndrome experience; rather famously, Bell Labs dropped out of the project, but the experience inspired UNIX(TM), which is Multics with some vital parts missing. It was delivered much later than planned, but it was a quality system. It was very possibly the only system of the ones you mentioned that had scope on the order of OS/360 and TSS/360 (5 and 1 man-millennia, respectively) ... but it wasn't done with the commercial pressure on those IBM OSes, and of course it embraced virtual memory rather than disdaining it.
Multics and the PDP-10 were both crippled by 1 MiB address space sizes (1/2 of 36 bits, word addressed, the total for the PDP-6/10/Decsystem-20, per segment for Multics, which was much less limiting, mostly a nightmare for big data sets).
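That "1 MiB" figure is easy to check: an 18-bit address (half of 36 bits) reaches 2^18 words of 36 bits each:

```python
# 18-bit word addresses, 36-bit words: size of the address space in MiB.
words = 2 ** 18                 # 262,144 addressable words
mib = words * 36 / 8 / 2 ** 20  # 36 bits = 4.5 bytes per word
print(f"{mib:.2f} MiB")         # -> 1.12 MiB, i.e. "about 1 MiB"
```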
I'd quibble that OS/2 was also a failure of execution, driven by marketing. IBM told IT managers that the PC/AT was the last PC model they'd have to buy for a long time, but the 286's protected mode was horribly misconceived (changing a 64 KiB segment incurred a terrible performance hit). Therefore early OS/2 had to run well on it ... but it really couldn't. Whereas Windows 3.0 hit a lot of niches very well, and the rest is history.
Somehow I don't see Cisco, very much not part of their DNA ... but then again, that's your whole point.
Future Systems would have been circa 1970; it was the ambitious 360 follow-on (vs. System/370, which was the 360 with ICs and DRAM). As I understand it, that group eventually gifted us with the very advanced AS/400 et al. Like Multics, every file is part of the address space; it's very neat and worth studying.
Anyway, thanks a lot for the insights and stories you've shared. I got my start with the IBM 1130, but then it was UNIX(TM); I watched but didn't really partake of the by-then-doomed Multics (Honeywell was horribly managed, blamed a project failure on the decision to microcode the machine, and then tried to compete with 1 MIPS async processors through the '80s), then PDP-10s and Lisp Machines, followed by an unending sequence of UNIX(TM) and Unix-alikes. Bleah, when you know we can do much better.
But it didn't have to be that way. IBM certainly had a chance to win my heart and mind with their systems at my home town's college (the 1130 and a 370/115), before I got exposed to better systems when I left.
The story went, some guys at Honeywell on Multics went to management and said, "We believe that we can bring up Multics on a super mini computer, sell it, and make money," and management said, "We don't believe you can bring up Multics on a super mini computer; if you did, it wouldn't sell; and even if it did, you wouldn't make any money."

So those guys did Prime. The OS was written in a slightly tweaked Fortran. I ran two of them and did my dissertation computing on one of them. For the first one, we were doing some DoD analytical work on TSO, and our two programmers were spending $80 K a year. We got a Prime for $120 K, and our computer usage went through the roof. We just copied over our 500 K of TSO Fortran programs, compiled, and ran; it all ran fine. That Prime was just for our group of 40, but soon the company of 300 wanted one, and I served on the selection committee. The second one I got for a B-school as a prof. There the Prime made the central computer group's Amdahl 470/V6 look silly, and I served on the committee to select a new university CIO.

At one time I had a summer job, and at first I was to program an 1130, but later the job was for me to design and build some DC power supplies to power some IBM tape drives that IBM had given away to a research lab.
Software Arts, as in Visicalc, used Prime computers. I see from a biography of Dan Bricklin, the less technical of the two, that:
Prior to forming Software Arts, he had been a market researcher for Prime Computer Inc., a senior systems programmer for FasFax Corporation, and a senior software engineer for Digital Equipment Corporation. At Digital, he was project leader of the WPS-8 word processing software, where he helped to specify and develop one of the first standalone word processing systems.
And the more technical partner, Bob Frankston, was an MIT type. I remember one Software Arts employee (a friend of one or both of them, or maybe Bob's youngest brother, who was a friend of mine) mentioning that, among other things, they appreciated what the system adopted from Multics.
Being bit-sliced, the Prime micro-architecture you mention was microcoded, which of course fits with their following the example of the successful System 360, many models of which had to be microcoded because the logic family they used pretty much had only one speed; IBM made the micro-architectures narrower, down to 8 bits as I recall, for the slower machines, and the two (?) fastest had none.
Honeywell's rejection of microcoding helped make the Multics and GECOS systems terribly uncompetitive, at least by the time Visicalc was being developed, although the macro-architecture allowed you to easily hook up 6 CPUs in one system (and 8 with a horrible kludge, as I recall).
Thanks. I covered a lot of ground quickly, and there are many rough spots, but maybe my main point, that they were a 'marketing company' selling to 'good IBM customers', was appropriate.
> Only bloggers have the patience (or obsessive compulsive disorder) to follow one company every day.
> I wrote story after story, [... but] I was naive. My hope was that when it became clear to the public what was happening at [...] that things would change.
"But this week's "job action," as they refer to it inside IBM management, was as much as anything a rehearsal for what I understand are another 100,000+ layoffs to follow, each dribbled out until some reporter (that would be me) notices the growing trend, then dumped en masse when the jig is up, but no later than the end of this year."
The path of HP, Cisco, Dell, IBM and all has been pretty clear for a while now, and Cringely doesn't add any insight that isn't obvious. Scale-up is drying up, and there is no pot of gold in scale-out.
Your turn at being a commodity is coming, and that prediction is more accurate than anything Cringely has made.
Yeah, they're still making tons of money, but consider that they haven't grown revenue in years, all their best talent is leaving in droves, they're selling off everything that isn't bolted down, and they're betting the future on cloud and Watson. They're late to the party on cloud (and, in typical IBM fashion, overpriced), and Watson ain't all it's cracked up to be. They have massive problems going forward.
I can tell you’ve been at IBM too long, because you say “cloud” rather than “the cloud” or “cloud computing”.
I have a friend who sends me quotes from their work sometimes, like “Good clarity on Data, Cloud, Engagement! Loved your passion about IBM’s competitiveness with Cloud!” or “My key takeaway is to evaluate how I can use information to stay engaged in analyzing data!”.
Whatever are they so happy about, I wonder. Cocaine stipend?
I find that surprising; of all the systems we're maintaining (hospital), our AS/400 is the one system that's been pretty much rock solid. I've generally been a fan of IBM's hardware.
As a sysadmin, I can't think of anything more exciting than the Power and Z stuff, but keeping that hardware around is not realistic anymore for a lot of companies. IBM kind of screws up with prices, contracts, service levels, etc.
Disclaimer: I have been on both sides of the fence.
Yes, but when was the AS/400 first introduced? 1988. Do you think that twenty-five years from now, people will say the same thing about the new IBM products they purchase today?
> Their deployment process for a business-critical Java application is as follows:
> - Someone calls IBM to alert the person (resource) responsible that they want to do a new deployment. If this person is sick or on vacation, try again later.
> - Then they send over a jar file with the compiled application.
> - The responsible person copies the jar file to the appropriate location and restarts the server.
I rest my case.