I think we underestimate how much time is required to be really good with computers. I'm very knowledgeable about computers, but I've been using them heavily for 22 years.
We have this innate comfort and familiarity with using computers because, as hackers, it's a huge part of our lives. Other people have other ways of life and other expertise. We shouldn't dismiss their computer illiteracy as stupidity; that's just as bad as them writing us off as "computer nerds".
There's also this tendency for non-computer-literate people to be overly self-deprecating. I noticed this with mathematics when I tutored people at uni, and with computers it's the same shit. I constantly hear "yeah, I'm awful with computers, it's all black magic to me". But the person saying that is often not even trying, which is frustrating. It's like a helpless excuse to be lazy and let someone else do the work, which is often not that complicated.
I don't know about that. There seems to be something innate about one's ability to use computers.
My father is 85 and is quite good with computers; he uses email constantly, writes documents and books using word processors, understands the concept of files, reads on a Kindle, etc. But he had never touched a computer before he was maybe 70 (let alone owned one), and before that he wasn't technical at all. He was trained as a Latin teacher (!) and spent his professional life in politics, never having to type anything himself.
My father-in-law is very good at many manual tasks, including woodworking, electricity, and electronics, and was an officer in the French army for many years, where he was responsible for maintaining highly technical weapons. He too is 85, and he can't seem to understand computers, although I have known him for 15 years and have been trying to help him for that long. He seems to want to learn but isn't really interested in what one would call "the big picture"; he just wants a list of actions that will get him where he wants to go. Of course that approach fails miserably whenever something happens that's not on the list.
I have two friends who went to top engineering schools who, I discovered recently, type whole URLs not into the browser's address bar but into Google's search form. My eldest son is 11 and doesn't confuse the two. My younger son is 7 and doesn't seem much interested.
I agree with this assessment. It has a lot to do with whether the person has the basic ability to understand computers and engineering-like things. If it's not taught at an early age it's tough to pick up this skill later. My parents are perfect examples of this.
My mother is a graphic designer, old enough to have done page layout with a knife and glue. She learned Quark, then Freehand, then InDesign, and still uses the stuff to this day. She was not raised with a computer, but her father was an engineer and she has that engineering mind, where she can follow something back to its root and figure it out.
My father, on the other hand, is a Dale Carnegie teacher. He's a people person. He's had a computer since we got an Atari ST in 1985. He's been on Macs since 1987. He STILL cannot use the thing properly. Sending an email is hard for him. Using his iPad is tough. Facebook enrages him because "I don't know any of these people, how do I get rid of them, I didn't ask to see their pictures!" My dad was raised by a photographer and a nurse. He never learned how to troubleshoot. He cannot find a root cause.
But he can hold a room full of people captivated and make them laugh, something my mother would NEVER want to do. So, it's those people skills versus those tech skills, as ever it was.
Aptitude, yes. Some of it is what you learn first, and therefore what you must unlearn. Ditto terrible mental models. Ditto tools poorly designed for the work. Etc, etc.
All this has been exhaustively hashed and rehashed by the UI/UX, cognitive psychology, design, SIGCHI, etc. communities for decades.
Typing URLs into Google's search bar is very common. I see it all the time. If Google is your default new-tab page, there's really nothing wrong with it. One extra click, maybe.
It does imply that the user is not (sufficiently) aware of the difference between a browser and a for-profit search engine. It's a type of behaviour that makes people dependent on a service where no dependency is needed. Also, how many of these users are both aware that they are sharing large parts of their browsing behaviour with Google and are okay with that?
Well, most browsers are for-profit too. Typing an address into Chrome's address bar will still send it to Google so they can give suggestions back, I think. (E.g., if I type "http://www.abc" into the bar, it suggests an autocomplete to "www.abcnews.com", which I have never visited. Doing the same in a 'Private Browsing' window turns off this autocomplete.) I'm not convinced that distinguishing between the address bar and the search field is either a necessary or a sufficient condition for understanding Internet privacy.
It does have the nice feature of spell checking for you :-) It's a good bit safer to search for your bank's site, for example, since there are large incentives for phishing sites to sit on common misspellings of bank domains.
Search engines have conflicting incentives here (Yahoo seems much worse than Google for this), sometimes allowing paid results for phishing sites above the real result. I can often tell exactly which phishing site a user is being directed to instead of the real domain they want.
I seem to have an innate talent for computers. Given a "clue" I can usually extrapolate most of what I need to know. I may be "smart", but my memory recall is horrible -- making connections is very difficult for me given too much time displacement. The extrapolation is mostly automatic, without actually "thinking".
I was just thinking about this recently while installing something with radio buttoned install options:
Standard Install (Recommended)
Custom Install (Advanced Users)
Yet the custom/"advanced" option was just a couple checkboxes that let you exclude extra bloatware during installation.
I wonder how many users, upon seeing both options, tell themselves they are not "advanced" computer users and just go with option 1 without even looking at option 2. Had they bothered to look, many of them would have understood to uncheck the bloatware they know they don't want. If that was too much, there was a Back button. But the wording "Advanced Users" scares many people away from even looking into that option, and so they end up with some Yahoo! search toolbar forever after...
Self-deprecation among computer users is one issue. The extent to which some software companies take advantage of it is another.
I'm constantly annoyed by "Advanced user" installation options.
They're occasionally accurate - I can think of a few programs where the "advanced" option demanded that I go and read a bunch of documentation - but mostly it's either laziness or malice. I don't think they've been a reasonable choice since storage was so limited that 'advanced' meant screwing around with DirectX directories and partial installs.
In 90%+ of cases, basic/advanced could be better addressed with a handful of simple-English checkboxes for what to install. One is locked-active, for the core bundle. The others have one-line descriptions of what they do and possibly what other products make them redundant, maybe with a "more info" button.
And since evil bloatware isn't going anywhere, we could at least set a norm that real menus give you options, and anyone trying to prevent you from engaging your brain is pushing crapware.
After being burned by ATI and Nvidia driver installs gone bad I'm reluctant to use anything but the defaults. Software SDKs too seem especially brittle when installed apart from the recommended options.
So it's not only casual users who prefer the default.
I would probably make an exception for driver installs, which definitely get into that "actually broken" territory. I'm also ok with "detect my device settings automatically".
I was mostly thinking about the program installs where 'advanced' is a list of different things you might want to install. The worst case there is pretty much just "you didn't install the thing you wanted", versus the "good luck buddy" worst case of tools and drivers.
Seems to me that you're describing the exact sort of psychological tactic that software vendors favor to get people to install the bloatware in the first place.
Please (re)read Norman's Design of Everyday Things, for starters. TLDR: Blame the designer (of the tool), not the user.
I taught 100s of people how to use CAD. Tough going. Technophile me thought "Of course it's better, easier, more powerful." Most of my students blamed themselves. But I knew they were smart, knew what they wanted to accomplish, and just couldn't figure out how to do their jobs with CAD.
You could say "square peg, round hole", or "like pushing rope". I preferred "CAD is an angry 800lb gorilla sitting between people and their work".
Being kinda thick, I finally figured out the tools suck, not the people. Specifically with CAD, worse than the terrible UI, they had terrible mental models, completely divorced from how the users see their own work.
In conclusion, blaming the user is not very humanist. I rage at computers all the time, and I supposedly know what I'm doing. I have deep empathy for anyone not chin-deep in this crap.
That's definitely the right approach most of the time. But sometimes people are just beyond any sort of help. When a literate, educated user cannot even read a message presented on the screen (and I don't mean understand or act upon the message, I merely mean read it), or when they don't understand that their electrically operated computer will not function until the building's electricity is turned back on, I think it's fair to blame the user.
Sometimes people really do have the necessary knowledge and skills, they just refuse to apply them to computers. Certainly, many computer people are too quick to blame the user when it's really a design problem, but blaming the user is not always wrong.
I sometimes wonder if this is the result of people falling into one of two categories of how they learned to see the world, deeply or shallowly. I think of myself as looking at the world deeply. I'm often less interested in the immediate response to my action than I am in why that was the response. I want to understand the cause of things. I've met plenty of people who don't care about this at all; they care that action X yields response Y, and (for the most part, barring specialties) go through life with that as their model of the world. Sometimes I envy these people, as their heuristic approach often yields better projections than I'm able to come up with in certain areas (such as how a person may respond to a complex situation).
Unfortunately, I think the shallow mental model has problems when presented with large, foreign systems of behavior. Similar to the average person dropped into a foreign culture with a different language, someone with a shallow mental model of the world may find a computer, with its decades of layered systems and skeuomorphic metaphors, simply too daunting to attempt on their own. At least that's my pop-psychology theory of the day.
I think there's a correlation here, but I'm not sure that someone's weltanschauung would be causative of digital literacy.
Rather, I think the divide is "I can" vs. "I can't", and the latter limited thinking leads to the sort of limited world view you describe.
It's something that fascinates me - so many lead lives that are essentially prescribed, view reality through a pre-fabricated toy lens - and when confronted with an individual who has chosen something other than the default options in life, they react vigorously and vociferously. I see the same in computer literacy and literacy in general - the "I can't" stemming from a refusal to accept new information which may conflict with preconceived notions.
If anything, I think it's fear of thought for the potential unhappiness that knowledge brings - adding something to the picture can make the overall scene so large and intimidating that many shy away rather than attempting the daunting task of filling the gaps.
I mean, it is daunting - once one has climbed a mountain of knowledge you can see the whole, the foothills, how it joins in coherence - but at the bottom, all you can see is an almighty sheer face. I have learned to be comfortable with the knowledge that I know nothing, despite knowing far more than my fellow man. Knowledge requires humility, as you swiftly learn that you couldn't be further from the centre of the universe or less significant.
I think there's a point early in life when this bifurcation happens - perhaps it's down to whether parents attempt to answer a child's questions about the world or simply reply with "stop asking stupid questions". Perhaps it's down to egoism - you either have to be humble or unspeakably arrogant to pursue knowledge, and the middle is intimidated.
> When a literate, educated user cannot even read a message presented on the screen (and I don't mean understand or act upon the message, I merely mean read it)...
This is usually trained into users by non-existent or awful error handling and messages in the majority of software. How the industry addresses errors is haphazard at best, and I see lots of evidence of what I call "checkbox/boilerplate project management" in lots of software products. The classic symptom is an error throws a message along the lines of "Foo Error: there was a foo type of error"; it is an error message, so I guess it got marked as a successful part of somebody's sprint by its mere existence, and not evaluated on its utility.
The industry should be hiring legions of editors with red markers to up our communications game when it comes to error messages.
To an extent, yes. I can understand why users will ignore error messages when they're in the middle of doing something else. I can even understand why they might ignore them when things are going wrong. But where it crosses the line for me is when an expert is helping them and asks them, "Can you read me the error message?" And they plead computer-ignorance and refuse. (Yes, this really does happen.)
I've seen people who can't understand the interface to an elevator: that the up light means it's going up and the down light means it's going down. That's a single-purpose tool that people use every day. How do you propose we design intuitive UIs for those people?
And aren't CAD tools power tools? They are designed to be easy to use for the experienced, not easy to pick up. There is a trade off between power and ease of use.
> It's like a helpless excuse to be lazy and let someone else do the work
In my experience, more often than not the 'not even trying' has nothing to do with wanting someone else to do it, but rather with being scared to mess up, exactly because they feel they are not knowledgeable. That's pretty instinctive I think (e.g. the first time I used a chainsaw I was also rather hesitant, as I basically knew sh*t about it), but sometimes it's also driven by the abundance of horror stories heard about 'losing all mail/photos/....' (which is, given the person's skills, usually actually very hard to do for them).
I agree. I felt that with maths many times - it was fear caused by lack of understanding. "I don't know what it is, I don't know how to bite it, I don't feel the problem, help!".
I have that with software too, sometimes - again, due to lack of understanding. For most of my Java experience, Maven POM files for me were "that fucking pile of shitty XML I'm not touching with a ten foot pole, because it doesn't make sense". I didn't know why it looks the way it does, what any change will do, how to make anything work. I eventually bit the bullet and spent a day reading up on the docs and now I'm not afraid anymore. But it took me some years to first stop avoiding the topic altogether.
> 'losing all mail/photos/....' (which is, given the person's skills, usually actually very hard to do for them).
The thing is, they don't know that. They have no intuition for the range of operations and capabilities of the piece of software in front of them. You and I both know that the program they're using probably can't find their mail or photos by itself, much less delete anything - nothing in its operation even comes close to deleting anything. But regular people don't know that, and I think the common factor here is not having an intuition about the limits of the space you're moving in.
XML obfuscation layer, semi-declarative runtime that can't be debugged (no breakpoints), weird rules, unpredictable failure modes, arcane config & installation, etc.
Maven is utterly broken and irredeemable. That you rose to the occasion to master it is a testimony to your adaptability and pluck.
I'm not proud of my adaptability; I'm ashamed it took me so long to bite the bullet and read the docs.
I'm actually working on turning it into an instinctual habit - instead of trying to power through the unknown, take a break and read a goddamn book about it. It makes you less productive for a few hours or days, but then quickly pays significant dividends. Sharpening the axe, and all that.
I brought up the story to highlight the fact that it was so easy, so natural for me to avoid dealing with Maven altogether for years. If, as a technical person, this is my default reaction to a confusing piece of technology, then how can I expect more from normal people? Conversely, their fears could probably be alleviated with as little as a few hours' worth of education, or at least a good explanation. The trick is to make normal people willing to invest those few hours when their own curiosity isn't enough.
Yeah, I see people happily surf all kinds of sites, but once they need to deal with local files they lock up in fear of making an irrecoverable mistake.
The benefit of youth is that one does not have prior memories of failure and punishment, which allows one to mess up freely. One of the first things I did after getting a computer was to accidentally blank the HDD. But after a friend helped me reconstruct the environment, I learned how to do the same and more. And these days I am the ad-hoc support line for family and friends.
I wonder if the whole GUI is not really helping here. Sure, it allows things like browsers to exist etc. But at the same time all the locations of files and such are mental rather than physical.
Back in the day, if I wanted to run a spreadsheet program I had to locate the floppy it was stored on. Now it is stored in one of the many sub-folders that blanket my 1TB+ HDD.
A whole lot of problems with home computers seems to stem from them doing a number of things in the background without the user being aware.
I can't help wondering what would come to be if one were to go back to the C64 or similar, where if we wanted it to do anything we had to start it ourselves.
BTW, there is an old OSNews article[0] from someone teaching basic computer usage to older people. He claims he had a fair bit more success with the CLI than with the GUI, because not only was it clear when the computer was doing something (one could not input more commands), but it also gave a history of previous actions and the ability to set aside tasks for later (with a warning about that on exit).
I can't help wondering if the whole desktop metaphor and WIMP, while making things more "friendly" at first glance (vs. that blinking prompt), has in the end just added a whole lot of conceptual thicket to cut through when trying to translate goals into actions.
All in all, I wonder what we would get if we had a CLI with some GUI niceties, like being able to add file names to a command by clicking them in the history of a recent ls/dir command.
> This because not only was it clear when the computer was doing something (one could not input more commands)
Maybe old GUIs were better. You had N windows on screen, meaning N tasks being done. Now you also have to track minimized windows. And tray icons. And programs that run in the background without showing any window or icon at all. And system services, which are different kind of background programs. And I'm probably forgetting something.
Right now, in Windows, I believe the Task Manager is a fundamental tool to understand the runtime state of your system, and it should be advertised as a "normal-user tool", not "expert tool". And I'm talking here about detailed process view[0], and not the dumbed-down view from Windows 8/10[1]. Users can't really hurt themselves with Task Manager anyway (you don't get to kill a system process that easily), and it's fundamental to getting a feel for what your computer does.
Tangentially, the "dumbed-down view" of Task Manager is an example of what I believe is the biggest sin of modern UIs - lying to people. There's hardly a better way to confuse the shit out of users' minds than presenting a limited and inconsistent view of the internal state.
> the ability to set aside tasks for later (with a warning about that on exit).
I wonder how it is that this feature was made - both in GUI and CLI - in such a way that almost nobody (except sysadmins) uses it? I mean, I personally discovered that Windows can do it only very late in my life, and I don't trust UNIX cron. Most people I know - including programmers - don't know about either. Yet delaying tasks seems like a useful feature. How come it's not exposed in OSes (and it seems not to exist on mobile at all, from the user's POV)?
Sorry, I was not talking about cron or similar, but simply Ctrl+Z. Keep in mind that while on the surface that is similar to having multiple windows open, the process is suspended and out of the way. Now the user can actively tell the process to resume in the background, or just leave it there until later. And if one attempts to log out while having such a process waiting, there will be a warning.
All in all, this is perhaps a closer approximation of a physical workflow than the desktop metaphor.
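For anyone who hasn't run into shell job control, here's a rough sketch of the kind of session being described (bash shown; the exact messages vary by shell, and some_long_task is just a placeholder name):

    $ some_long_task
    ^Z                          # Ctrl+Z suspends the foreground process
    [1]+  Stopped               some_long_task
    $ jobs                      # the "history" of set-aside tasks
    [1]+  Stopped               some_long_task
    $ exit
    There are stopped jobs.     # the warning-on-exit mentioned above
    $ fg %1                     # resume it in the foreground
                                # (or: bg %1  to let it keep running in the background)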
"Task Manager is a fundamental tool to understand the runtime state"
IIRC, the taskbar and tray are meant to communicate that at all times. But trays were easily overrun as people installed more and more background software on a single PC. Modern taskbars too have become so crowded there are groupings and virtual taskbars for the virtual desktops.
One cause is software unnecessarily running at all times. Another is people do more with their PCs than in times past.
> But the person saying that is often not even trying, which is frustrating.
This reminds me of all the anecdotes I've heard about people who immediately click through all alert boxes, to the point of being nigh-incapable of passing on an error message unless someone is standing over their shoulder forcing them to read it.
Oh god, my mother. What she needed to do was written in plain language right in front of her, and it was nothing technical but something simple like "you should fill out that text field". I had to install antivirus software that would not nag her about installing the for-pay version, because no matter what I said, she would inevitably end up with the for-pay version and its useless features installed (but only the trial version, she's smart enough not to actually pay). My mother is pretty intelligent and far from senile, just around 65; it wasn't that long ago that she was a busy executive in a major corporation who had to manage not only people but also her Excel sheets and MS Office at the very least. I still remember, many many years ago, when she first got a computer with a mouse. She moved the mouse carefully between the tips of only two fingers, taking her hand off it between movements to stop and look...
The first time my mother used a mouse was in 1989. She called me on the phone and said "the mouse is not working".
- What do you mean it's not working, does it move on the screen or not?
- Well, it moves but just a little.
- ??? Tell me more?!?
- When I do big movements it moves a little, but when I do small movements it doesn't move at all. I can't get it to cross the screen, even with momentum.
It took me a while to understand she was moving the mouse in the air; in those times mice had a ball that would indeed register movement if you shook the object hard enough.
This was fun and all, but in the end my mother learned to use computers and is now very good at it.
It doesn't help that 'the Windows trick', UI-wise, became popups. Popup after popup, almost all of them useless things or, worse, annoying ads popped up over whatever page you were on.
There is a LOT of noise in that signal. Contrast that with more traditional (teletype-inspired) interfaces that only reported actual warnings (if asked for) and failures, unless you were extremely specific.
This UI hostility also burns people who aren't experienced with hotkeys. An awful lot of malware-serving webpages pop up duplicate header bars as part of an image, in hopes of catching people who click on the wrong X (much like ads that consist of fake download links).
That's a non-event for an experienced user, who (if ad block didn't disable the page) probably resorts to the close-tab hotkey by sheer habit. But it's an appreciable threat to someone who's trying to rely on a predictable screen UI for their browsing.
The weirdest thing I have seen was some ad-delivered malware that would pop up a full-screen image of an XP desktop, complete with a popup claiming to be some error or other.
The whole thing was clickable, and it fooled me the first time (I really should have picked up on the pointer being a hand rather than an arrow), never mind my less knowledgeable relatives.
That's a new one on me, too. I'm not sure I would have caught it.
One useful trick is right-clicking first, which shouldn't trigger the link and will be very different on an outbound link than a blank webpage or desktop.
Of course, mobile is where I get the most trouble from this - I don't have any hotkeys, and real page-loading is so bouncy and unpredictable that it's particularly easy to get caught by a fake top bar. It's made even worse by things that scroll away the top bar as you move down page, and then bring back a fake one.
It pains me to admit that I even know some IT people who don't read error messages. (I suspect part of the problem is the effort it takes to read them.)
Not reading error popups is the norm around me, and I work in IT support. It has nothing to do with the message, but with the medium: a message popup disrupts the user's workflow, and more often than not for no good reason. All people who work with computers very quickly get into the habit of dismissing flow-breaking popups. I'd say the problem is not with the people.
It's like the ER doctors who miss crucial "Patient is dying!" alarms. They're not just lazy or careless, they work in an environment where every device is screaming "I am here!" every minute of the day.
It's both an important work skill and an unavoidable physiological response to tune out obnoxious and unimportant stimuli, to the point where the people abusing those signals ought to be held accountable.
You had something similar that resulted in a mid-Atlantic plane crash some years ago. The crew got so blanketed by automated warnings, and didn't have any visual cues thanks to being a night flight, that they basically flew the plane into the ocean.
Pilots are repeatedly taught to never "silence" the annoying warning horns that come on for different reasons (slow speed, wrong landing gear configuration). The idea being that mindlessly dismissing alarms will build a bad habit pattern. Especially when you're actually in trouble.
I don't read them either it turns out, even though I would assume I'm the kind of person to do so. I particularly notice this when say in a video game you need to hit one button to exit and then it asks you to confirm. I can find myself going through the exit, cancel exit, exit again loop multiple times before my brain catches up to what's happening.
In the case of IT, you also have to factor in that sometimes people dismiss error messages because they already know what they're going to say. Back in my C++ days, I would often not bother looking at the error output at all, because the moment I saw the compiler start to spew out text, I immediately knew where I'd made a typo.
> it's the same shit. I constantly hear "yeah, I'm awful with computers, it's all black magic to me". But the person saying that is often not even trying, which is frustrating.
"For twenty years, my research has shown that the view you adopt for yourself profoundly affects the way you lead your life. It can determine whether you become the person you want to be and whether you accomplish the things you value" (https://www.brainpickings.org/2014/01/29/carol-dweck-mindset...).
There are several more pieces in that series, on that site, if you care to look.
The reality and strength of the Growth Mindset theory is fairly unclear, despite a lot of books pushing it as gospel. It's also unclear whether it's accurate even if it is effective - there's an open possibility that people who are basically delusional about their own potential practice harder than the realists, and a handful of them who had high starting potential get great results.
So... age-related degeneration is real, neuroplasticity drops over time, but I wouldn't want to connect that to growth mindset because I'm not convinced that's even a real effect.
I think the incentives to having a growth mindset are diminished. Once people are comfortable with what they have in life - job, spouse, children, what benefit does learning, say, poetry, or programming, have to their daily life? I feel like at a certain point you have to feel like it enhances your life beyond allowing you to acquire the standard metrics of success that we adopt into our cultural consciousness.
It's a very large and fuzzy topic. It depends on what computer system you run. Being able to use Windows is not a skill; it's extremely unconceptual and ad hoc, mostly rote learning. I'm sure older systems, with fewer layers and less cruft (think a 70s box), taught you more, giving you simpler tools and more ways to solve problems on your own. Nowadays it's mostly fads.
You're right about self-deprecation. Society holds up computer usage as a valued skill even if it's mostly absurd in reality (because I knew Excel, people would watch me do basic tasks and say they couldn't do it). It affects many people, and it's very, very hard to reverse. It's indeed very similar to math anxiety. It reminds me of childhood friends who dropped out of school in junior high and couldn't do the "simplest ratio", yet were doing ratios on the fly with custom units to sell weed; I couldn't follow them. It's all ceremony, sadly.
ps: one last thing, I wonder how computer-illiterate people would feel if given a system like NixOS or GuixSD with an almost fully rollbackable system, meaning they could mess with it without negative emotions and with almost no superstition.
Even in programming schools I and others felt that we should have old machines to program most of the time. It makes you think differently, or just more.
When I push this train of thought I sometimes believe the mainframe programming style's slow turnaround - plan, reflect, write, wait for the test, correct - was great for this.
Sure, exploratory programming, which I used to love, has value, but I think a lot of development in past days came from the technological wonder of "can we do it this way? cool".
In the post-mainframe golden age, before UNIX and C came to screw everything up, people were using highly interactive programming environments with REPLs, system-wide debuggers, ability to work directly on running processes, etc. I think having access to such tools does not diminish your ability to think (I'm judging by my experience with those tools which somewhat survived in the Lisp world). But the issue with distracting OS seems more real. Back in the 70s, they didn't have a task bar with IM, music player and web browser waiting for you there. It was you and your dev environment.
Not just computers. Whenever I visit home and my mom or sister want to watch Netflix, I have to be the one to plug in the HDMI cord from the PS4. I've shown them how to do it dozens of times, but they somehow zone it out and would rather bother me and wait than figure it out.
Then again, HDMI is laden with HDCP, and I seem to recall more than one story about how you had to turn on devices in just the right order to get something other than a black screen or an error because HDCP tripped up.
Digital signals are in theory a boon, but digital signals wrapped in DRM are a hot mess best avoided.
Reminds me of my cats. One of them learned to open doors by jumping on the door handle. The other didn't, and when he needed some door open, he would wait for the first one to come and help.
You: "Seriously? It's about 5% more complicated than plugging in the vacuum cleaner. I'm in the middle of something. One last time and that's it." Voila!
I couldn't agree more with your last paragraph - I see this all the time. If these people approached other skills like driving a car or using a washing machine the same way they approach computers, they'd walk around in dirty clothes.
As you mentioned it's easy to dismiss computer wizardry as black magic. This is the lazy approach in my opinion. This is simply a question of attitude towards problem solving.
I compared technology proficiency levels against adult literacy levels, listing the equivalent literacy level that covers a relatively equal % of the population. I.e., level 3 tech proficiency is 5% of the population; reading above a college level is 5% of the adult population:
The literacy idea is a good one. I was mystified by the description of the tasks - basically an IQ test, or a cognition test, or a math-and-logic test that only tangentially involves a computer - and by the predictable IQ-test-like results being read as "computer user interfaces suck", which is pretty hilarious.
"I failed the math section of my standardized tests because the user interface of a #2 graphite pencil is poor." No, it's because you're not good at math; the problem isn't the pencil. Half the population has to be below the median, after all, and we design civilization to make sure evolution doesn't cull anyone, so ...
I am kinda old and I can assure you that plenty of people were completely incompetent at using a physical card catalog (kind of like a printed out google search form, where an old fashioned library used to kinda be a printed out internet, back before libraries became internet cafes without coffee and also turned into daycares). Inability to search for things intelligently is not a recent invention of computation, any more than the inability to write essays is a result of word processing software or century old typewriter or the ballpoint pen.
It's a truism throughout history that as soon as the first tool was invented, some dude without enough mental horsepower to use it immediately started attacking the tool as being poorly designed. Fire, wheels, carpentry tools, farm machinery, computers - same old story. Some folks can't think, and giving them a tool and then asking them to perform a thoughtful task is doomed to fail and results in the tool catching most of the blame. Why, if only wheels were better designed to be round in all three dimensions instead of just two, then that pesky problem of axle placement wouldn't be so stressful for end users, etc. etc.
The fact of the matter is that the median human is barely at the level of boolean algebra basics, arithmetic, and following the instructions on a packaged cake-mix box. Giving someone innumerate a calculator doesn't mean they can now launch the space shuttle, any more than giving someone an anvil means they can shoe horses, or handing me a basketball means I automatically have a career in the NBA.
No. The median is a value such that at least half of the population is not smaller than it and at least half is not greater than it. In typical cases of large(-ish) populations with a range of values, the notion that "precisely half of the population is smaller" is just a mental model good enough to generally understand what the median is about.
Now imagine the degenerate case of a population where every specimen has the same score. Surely there is a median, but surely no specimen is smaller than the median. A slightly less trivial case is a population of five with one value below, one value above, and three values in the middle - say scores of 1, 5, 5, 5, 9: the median is 5, yet only one of the five values lies below it.
This is not adding up... 32% of the US population has a Bachelor's degree, but only 5% have college-level reading skills? I think the original survey probably has some fatal flaws. Maybe they should republish the results after taking into account only individuals aged 25 or older who actually responded.
The disconnect is that the definition of "college-level reading skills" is normative, not empirical. It refers to what a college-educated person "should" be able to read, based on the materials they see in coursework, not on what an actual college graduate can in practice read. Similarly, you'll often see stats showing that the average 10th-grader in so-and-so slice of the student population reads at a 3rd-grade level - meaning that their reading skills are really only up to school materials taught in 3rd grade.
The phrase "college-level reading skills" has always been well-defined as a prescribed threshold of literacy rather than describe any college student with any level of reading.
If we used your suggested 2nd definition of "real students", the label would be circular and meaningless. (E.g. prerequisites reworded in a bizarre way such as, "Remedial Reading courses support students who do not demonstrate non-idealized reading skill proficiency that real students have."[1],[2])
The alternative definition "any college student with any level of reading" certainly strikes me as a straw man (granted, an unintentional one). How about "average reading level of a college student"? Or "average needed to graduate college"? (I'm not saying that is the definition, but I think it's closer to what laymen assume "reading level" refers to.)
> How about "average reading level of a college student"?
This is a form of normed referencing, and ultimately it becomes a (sometimes rapidly) moving target that makes comparisons difficult.
The standard used in the field is criterion referencing (similar to "prescriptive" used by 'jasode). This allows for easier apples-to-apples comparisons.
You are correct that the layman understanding is closer to your definition (at least in my experience). But the layman understanding that you describe is not the standard in the field.
That seems reasonable - anecdotally, I know many graduates from both sides of the pond who have never read a book they didn't have to - for instance, in the UK, Fifty Shades was broadly heralded as the first book many had read in their adult lives.
From within our intellectual-elite bubble it's easy to go "surely it ain't so", but there it is. Hell, my ex-business partner, a highly educated and intelligent man, doesn't own a single book and hasn't read one since "Animal Farm" at school.
The sad reality is that the vast majority of people have absolutely no interest in reading. It's not a question of access, rather perceived effort.
I think school is partly responsible for this. I had to read many terrible books in school. French is my mother tongue, and I had to read stuff written in ancient French where a third of the page is definitions so you can understand it. Most of the time the book is just uninteresting. In my 5 years of high school, there are 3-4 books out of the 20+ that I consider worth reading. And the worst thing for me is the analysis that comes out of nowhere. I actually liked to read when I was a kid, but now when I see a book I just get French-class flashbacks and a feeling of disgust. People tell me it goes away after a while. I hope it's true.
Anecdata: I know a lot of well educated people who don't read. At all.
What reading level would you expect for someone who got a degree in their early twenties and then didn't read anything more complicated than People magazine for decades after?
Level 1 or 2. They may have never been functional at level 3 (not that there is anything wrong with that).
Level 3 means that someone can consistently read and comprehend the full meaning of college-level texts. In the US, one does not need to read at this level to graduate from college (or even many grad schools).
There are many people in the world who are smart and/or very knowledgeable who don't consistently read and comprehend at level 3 across the range of topics covered in a typical college curriculum.
Source: Me. I have trained people in both reading literacy and tech literacy. My experience is both theoretical and practical. Feel free to ask follow-up questions if you have any.
Even most programmers are distributed like this. My team recently did usability tests of a networking library that I work on. We screen-recorded a few random programmers trying to install it and then make a simple app with it, giving them access to our tutorial and Hello World demo.
It was painful to watch them stumble about, trying to debug installation errors that seem obvious to us. Just like watching a lay person try to use a computer, watching these other programmers made me want to blurt out the answer.
Any of the lessons you take away from designing simpler UIs for lay people apply just as much to professionals. Write a better API, a better library, and better documentation.
Case in point: while Java Swing was popular, noobs, including me, were looking at Graphics2D and Image when we first wanted to display a picture.
The correct solution was JLabel.
Since then I have learned a lot, but man pages and docs without examples are still annoying. (BTW: I used rsync manually for the first time in a while yesterday and the man page was good; the src and dst syntax was described. : )
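For the curious, a minimal sketch of that JLabel approach (the frame setup is boilerplate and "photo.png" is just a placeholder path):

    import javax.swing.*;

    public class ShowImage {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Image demo");
                // A JLabel can hold an icon, not just static text,
                // which is why it is the idiomatic way to display an image in Swing.
                JLabel picture = new JLabel(new ImageIcon("photo.png"));
                frame.add(picture);
                frame.pack();
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }

The Graphics2D route mentioned above is meant for custom painting; for simply showing a picture, wrapping an ImageIcon in a JLabel is all it takes.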
The problem there being that few people are going to see JLabel in a class list and think "oh, that must be something to do with my image-related needs!" Especially if they've used other GUI toolkits, which as far as my experience goes use "label" exclusively to mean "some static text".
Well, except WPF, where you can re-template anything you like, but even there, the default Label is just a text container.
Or web developers, who know that the obvious way to insert an image is to take a block element originally used for text and style it with a background image...
> The problem there being that few people are going to see JLabel in a class list and think "oh, that must be something to do with my image-related needs!" Especially if they've used other GUI toolkits, which as far as my experience goes use "label" exclusively to mean "some static text".
In Tcl/Tk you would also use a label to display your image with. Tk was one of the most popular toolkits at the time Swing was designed.
Being blunt, it's also possible that your library was garbage from a usability perspective. It's nearly impossible to recognize how painful it is to work with your own code since you are so intimately involved in writing it.
That's most likely true. Most libraries are horrible to install and configure, and only make sense to the people who wrote them.
I gave up on learning PHP and Laravel recently when I went to install Laravel and they recommend Homestead. So I went to install Homestead, but they require Vagrant. Of course Vagrant requires VirtualBox, and VB can't find my kernel source code on RHEL 6.5. Okay, so let's skip Vagrant... Okay, now I need to install Composer instead of Homestead... what's this? SSL operation failed... failed to enable crypto... operation failed in command line code...
Yeah I give up. Yeah, I could take the time to work through this nonsense, but I shouldn't have to. Getting multiple layers deep of having to install dependency A to satisfy dependency B which satisfies dependency C is not my idea of fun.
> It was painful to watch them stumble about, trying to debug installation errors that seem obvious to us. Just like trying to watch a lay person try to use a computer, watching these other people made me want to blurt out the answer.
I see that all the time. Programmers aren't necessarily expert computer users.
More than half the people on my team don't use, say, WebStorm's regular-expression search & replace to quickly refactor, or find usages, or anything in the refactor menu, or even the npm scripts I made to quickly update the project -- any of the less obvious features that would make their own lives easier.
And they also like to work on one monitor, maybe two.
Just watching them try to debug something with their cargo cult methodology is frustrating.
Shit, I have people on my team who click Edit -> Copy and Edit -> Paste. Sometimes they might right click and paste, but keyboard shortcuts are foreign to them.
Isn't it common knowledge though, that the setup is a really big chunk of the work?
And the problems encountered are probably not all too exotic, so you'll be much better after doing it a few times. However this is not a task you do often (at most once or twice per project). It's no surprise to me that years of practice work wonders on reducing the setup time.
If you add researching suitable tools, it probably becomes even more dramatic (finding, installing, testing, uninstalling, and repeating until you think you have the best, or at least a suitable, solution).
> It was painful to watch them stumble about, trying to debug installation errors that seem obvious to us
At least they were trying and did not dismiss the error messages as soon as they appeared.
Links like this illustrate what a distorted bubble the typical HNer lives in.
I've lost track of how many times I've seen a product or company posted here and HNers will say "I can already do that myself." For instance, I still chuckle at the "you can already build such a system yourself quite trivially" comment in the Dropbox HN post nearly a decade ago.
Even within HN there are bubbles inside the bubbles.
I keep seeing _sec people make statements that only make sense if one is the administrator/dictator of a company network or server farm, and that are basically impossible to implement on a personal computer without effectively treating the user as an attacker.
They should have included a category 'below 0': "User attempts to complete task by using the interface provided to him. Gives up in disgust after 15 minutes. Spends next 8 hours to (in this order) use common automation tools to reach his goals, write small scripts, reverse engineer the data and network formats, and disassemble the executable to inject his own hooks. After still failing to accomplish task, user tries to gradually replace parts of the provided software with his own versions; first using quick glue languages, eventually having to resort to rewriting large core parts in C just to have enough control of all components to make them interoperate. After 4 weeks of caffeine-fueled 16 hour days, user declares war on the vendor and starts writing his own version of the software and ecosystem to free humanity of the plague that is this god-forsaken piece of crapware. After 8 months of battling 30 years of legacy interoperability, data exchange formats and interfaces and the 8th question from Suzy in Admin about where the button that made the one thing with the blue border go away and then made the green other thing go bleep bleep has gone, user flees to off-grid cabin in the mountains with a beard below his collarbone and a crazed look in his eyes, while muttering 'they drew first blood. They drew first blood.' The end."
(bonus upvote for the first person to recognize the movie reference)
It's [OC] so thank you - and I'm not ashamed to admit that during the first few hours after I posted it, it was up/downvoted a bit and it even got to -1 for some time; and that at that point I cried a tiny little tear inside, filled with essence of the programmer's existentialist anxiety...
I would like to think about the set of skills that don't revolve entirely around coincidental interfaces made by a few mega corporations.
Are there computer skills more general than the ability to use iMessage or Windows Live Mail? Because if using iMessage means you now know how to chat, then what happens when NewChat(tm) comes out with a new brand experience?
What happens when Microsoft goes Metro or Windows 10?
As a society, should we be paying for user training for specific familiarity with proprietary interfaces? Shouldn't Microsoft or Apple be paying for this stuff? This is why I'm generally very suspicious of what people mean by "computer skills" in schools. They generally mean user training for Apple, Microsoft, or Google.
The generalized skill is the ability to recognize and categorize metaphors in the interface. Once you've used a few different systems you start to gain an intuition that is more generally useful. But most users don't use a lot of different interfaces; they have experience with just one. I think that's the primary difference between people who work in tech and UX and the general user. One group has developed an intuitive sense for metaphors, and the other group has memorized at most a couple of sets of metaphors.
To me, computer skills mean two things: the rate at which one can learn new technologies, and one's understanding of the basic components that make up the machine.
Programming fits both of those categories: consider the number of programming languages in vogue at the moment. These languages, however, are all composed of the same components that make up the machine.
Programming languages are software, no differently than iMessage or Windows Live Mail. It's just another way to control the machine.
These are called interface standards, and applications should follow the standards defined by the ecosystem in which they reside. We adhere to them to ensure that the user base has as much of a baseline familiarity as can be assumed.
As for the original question regarding the distribution of user's "computer skills"... if you can figure out how designers think about how you should use systems, then it's much less about having "computer skills" than having the capacity to figure out the psychology of the application designer - if you can type and use a mouse and can either Google or read a book describing the paradigms of an application or eco-system and figure out what they mean, you've got all the computer skills you need.
As programmers, I think we tend to forget what exactly it is we do... we don't just program, we figure shit out at the most technical levels. Other people are quite adequate at this too - we have lawyers, doctors, surgeons, mechanics and any number of other professions that solve way more complex problems than we do - we're not islands of all the world's intelligence. If people appear to lack "computer skills" it's because the paradigms we use to define our user interfaces and solve the problems we are attempting to solve are shit - and I say this as a pretty well seasoned computer programmer who spends most of my life trying to figure out how to get other people's shit user interfaces to get my own shit done.
So... that's really what I've got to say about that. People will figure out what they have time and inclination to figure out. If they don't have the time or inclination to figure it out, then it's your problem: Your software isn't solving big enough pain points for the learning curve involved. This means that either this user is not your target audience, or you don't understand your target audience well enough.
Either way, it's not on the end user, it's on you as the software designer.
I think there is a set of basic computer skills that can help one manage personal finances and basic administration, safely maintain (and back up) a collection of important digital documents (including personal photos, but also things like scans of diplomas and such), and author neat letters and a resume for formal correspondence or job hunting.
Regardless of their future career, people should probably be taught how to use a word processor (use styles, hierarchically outline your document, basic sensible typography) and spreadsheet software at school, along with basic internet skills (particularly with regards to finding reliable information and understanding the pros and cons of social media).
None of these skills require any programming knowledge.
>> I would like to think about the set of skills that don't revolve entirely around coincidental interfaces made by a few mega corporations.
> wouldn't that be knowing how to program?
Nearly all programming languages or instruction sets for assembly languages are also coincidental interfaces (this time for programming), often (though not always) at least originally designed by some "megacorp":
- Assembly languages: typically the respective processor vendor
- C: Bell Labs (AT&T)
- C#: Microsoft
- Go: Google
- Java: Sun (now Oracle)
- Javascript: Netscape
- Objective-C: NeXT (now Apple)
- R: A reimplementation of S (originally by Bell Labs; later commercially offered by TIBCO Software (another megacorp) under the name "S-PLUS").
- Swift: Apple
So what I would consider as more general computer skills is rather the mathematical or the electrical engineering side of computing.
"Nearly all programming languages or instruction sets for assembly languages are also coincidental interfaces (this time for programming), often (though not always) at least originally designed by some "megacorp":"
Nearly all the mainstream, massively successful ones, yes. But the vast bulk of languages are not, and there is little evidence that any of them offers some sort of massive advantage to "normal users". There are better ones than those for some purposes, yes, but after many thousands of languages it's not like there's one that is clearly better for non-programmers - and this after at least several dozen and probably several hundred attempts explicitly aimed at that purpose.
Asking for people to understand the math is an even larger ask than asking them to understand the programming, which I base on the fact you can still find a very significant percentage of professional programmers, possibly even the majority, who have disdain for the mathematical elements of computing.
This is a not-very-generous reading of "most", because while I read that as "most programming languages (weighed by how often they are used)" you can also read that as "most programming languages (weighed by counting them)" if you want to prove some point.
In the context of discussing whether or not there is a more non-programmer friendly programming language, the fact that there are dozens or hundreds of attempts at that specific problem, which have largely failed, is relevant. The programming world is not defined by the top 10ish commercial languages; they're merely "very important".
It is also not an unfriendly reading when my very first words (which I have not edited) acknowledge the alternative readings.
I might just be confused, but I'm having a hard time understanding what point you're trying to make. I've read your comments and the context a couple times now and now I'm only sure that you're reading something in other comments that I'm simply not reading.
Yes, there are programming languages aimed at non-programmers. Many of these are also "commercial" languages, or at least were commercial in origin (like C). Smalltalk (Xerox), Hypertalk (Apple), the efforts towards 5GLs in the 1980s, Visual Basic (Microsoft), etc. We agree that these are relevant, I'm just not sure what point you're trying to make with that relevance, and I'd love it if you could elaborate.
>, I'm just not sure what point you're trying to make with that relevance,
I'll make an attempt to explain why jerf's response looks out of place.
Basically, wolfgke and jerf looked at the 2 groupings of programming languages (mainstream megacorps vs unknown) as evidence for 2 different goals.
wolfgke: generalization path -- dominance of megacorps languages means you must keep going one level lower in hierarchy of computer science concepts to learn universal concepts instead of proprietary syntax.
jerf: pedagogy ease of use -- megacorps languages are not provably any harder to learn than specialized toy languages
How did those two end up talking about 2 different things?!?
If you look at the comment chain from posters threatofrain-->bryanrasmussen-->wolfgke ... they started a dialogue that keeps diving lower and lower in underlying principles. It's a variation of the XKCD comic about "purity".[1]
jerf's response doesn't continue that purity dissection. Instead, his emphasis on pedagogy seems to point back to the original article by Jakob Nielsen which prompted this thread. That article says that elite users (like HN readers) can use complicated computer software and we forget that most others can't. The communication breakdown was assuming that wolfgke listed the megacorp languages as an (ease-of-use) response to J Nielsen instead of as a specific (purity) reply to bryanrasmussen.
But then again, I might have misunderstood everybody and I have no idea what people were trying to say.
> wolfgke: generalization path -- dominance of megacorps languages means you must keep going one level lower in hierarchy of computer science concepts to learn universal concepts instead of proprietary syntax.
Correct, with the additional (IMHO important) fact that threatofrain criticized "coincidental interfaces made by a few mega corporations" and looked for more general skills. bryanrasmussen suggested that "knowing how to program" is such a skill, but I pointed out that the most popular programming languages are also just "coincidental interfaces made by a few mega corporations" (this time for programming instead of for the general user), so we have won nothing concerning the original problem of "coincidental interfaces made by a few mega corporations" - we are just some layers deeper. So I suggested that if you look for more general skills, you probably have to look even deeper, into the mathematical or the electrical engineering side of computing.
I would have liked to see screenshots of the UIs they used, but they seem to be absent from the OECD report.
A lot of the standard tasks we do with office/business software are unintuitive, but easily learned once someone shows you or you Google it. Even as a designer or developer, it's not always clear what each button does.
>participants were asked to perform 14 computer-based tasks. Instead of using live websites, the participants attempted the tasks on simulated software on the test facilitator’s computer. This allowed the researchers to make sure that all participants were confronted with the same level of difficulty across the years and enabled controlled translations of the user interfaces into each country’s local language.
Same with the language used in software: sometimes translating UI buttons actually makes usability worse, because now you have to learn the shared language all over again. The descriptions on UI elements are often useless unless you already know what the buttons do.
I've been telling my SO to "google it" for years. She won't do it unless I ask her to. Then when she does, I do this -> 0_o because, really, why are you typing that? Do you not know how to google?
Anyway, all this misses the point. If you have to google how to use a GUI the software has failed at a fundamental level.
Only 2 out of 6 questions had anything to do with UI/UX - the rest were either reading comprehension and/or basic math.
Of the 2 that dealt with UI/UX, both were extremely hard; in one of them I had to send an email, get a token and navigate the site to send another email.
If this is what the results are based on, I wouldn't base any UI/UX decisions for web products on the results of this test.
The simulated environment also makes it more difficult. I accidentally tried to go back in the real browser instead of the simulated browser and broke the test. The simulated environment is also very slow.
yeah, one of them told me to highlight a passage of text in some dense report about education effects. WTF does that have to do with "computer literacy"?!
It's very concerning to me that they didn't include screenshots of this simulation site in their report. The pages and pages of data are meaningless without knowing what they were actually measuring.
One of the questions for the test in English is “when did you first come to the United States”, but I have never been to the United States...
Incidentally, while I got every question right, I could easily imagine myself getting one or two of them wrong. Not out of stupidity, just bad luck. The question asking about the impact of educational attainment had at least 2 sentences that plausibly answered it, yet the correct answer was to highlight just 1 of those sentences.
I see the argument of "not enough hours of practice" in some comments. I disagree. That would only be true if there were a direct correlation between hours spent at a computer and moderate proficiency with one (e.g. the ability to independently reinstall your machine completely and diagnose which hardware component to replace after a failure), and I don't see that correlation.
The unmentioned problem here is that people are actively reluctant (this is tested, too) to learn the skills that would let them be more independent. I doubt this is entirely related to the total amount of hours spent at a computer.
In some ways, computer literacy seems to be a similar scenario to cars: the homo simplex doesn't want to understand how it works, he/she just wants it to work, enjoying the luxury of not needing to understand it while posting duck faces on Instagram.
Let's face it, computer illiterates understand less, pay more and end up having lower purchasing power. Considering that this is entirely under their control, I find this quite fair (I have different opinions about Seniors, but that's another discussion).
To be honest, I can find only one valid reason for this to be an issue: civil liberties and human rights.
I'm intimately and increasingly concerned by how many civil liberties have been taken from citizens in "developed" countries in recent years, and I am under the impression that this is a direct effect of computer illiteracy spread across all levels of the population.
Citizens are being asked to vote on laws relying on technological concepts they don't understand, and which end up stripping them of their rights.
That's what computer illiteracy is about and that's what (I think) the article should talk about... :(
my father is a professor and has been asking my mom the same questions about microsoft word for the past 20 years, never learning it. he spends a lot of time on it too.
Take word processors, for example: I really hate them with all my heart, but when getting my degree I had to write a lot of stuff. So I sat down and learned the core functions of LibreOffice Writer and it made my life better.
Even my profs were amazed at how good my papers looked. Not because they were really good, but because none of my fellow computer science students bothered to learn the software and make theirs look good.
(Yeah, alright, I have written a scientific paper in LibreOffice as well. But it was one that was started by someone else, I just finished it. And I didn't inhale.)
Wow, there's a lot to unpack there. We went from not being able to schedule a meeting between four people in a calendar app, to lamenting the lack of computer repair skills, to denigrating people as somehow less than human ("homo simplex") because they have better things to do with their lives than to devote them to your hobby, to the rise of mass surveillance.
There's no shared thread among any of these things, except that they all contain a silicon wafer somewhere. They also all contain plastic, but I don't see anyone lamenting the lack of knowledge about polymer chemistry among the populace.
The calendar problem isn't necessarily a "computer skills" problem, as much as it could be simply a lack of familiarity (solved by time) or a UX problem (making it literally not the user's problem). You want to know a secret? I don't know how to use GMail. Seriously. I don't.[0] I had to get someone to come over and show me how to simply reply to an email, because every time I clicked what looked like a reply arrow, the damn thing went back to the list. I have no idea how to sort the inbox, even if you can. Perhaps the most popular email client in the world, and I don't know how to use it. Am I illiterate? Sure looks it. Yet, I can send an email using telnet.
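(For the curious: the telnet trick is just speaking SMTP directly to a mail server. A rough sketch of that conversation, scripted with Python's standard library purely for illustration - the host and addresses are made up, and most real servers will refuse to relay for a stranger:)

    import socket

    def send_line(sock, line):
        # Send one SMTP command and print the server's reply.
        sock.sendall((line + "\r\n").encode("ascii"))
        print(">>", line)
        print("<<", sock.recv(1024).decode("ascii", "replace").strip())

    with socket.create_connection(("mail.example.com", 25)) as s:
        print("<<", s.recv(1024).decode("ascii", "replace").strip())  # server greeting
        send_line(s, "HELO client.example.org")
        send_line(s, "MAIL FROM:<alice@example.org>")
        send_line(s, "RCPT TO:<bob@example.com>")
        send_line(s, "DATA")  # server answers 354: go ahead, end with "." on its own line
        s.sendall(b"Subject: hello\r\n\r\nSent without a mail client.\r\n.\r\n")
        print("<<", s.recv(1024).decode("ascii", "replace").strip())
        send_line(s, "QUIT")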
Personally, given that people find word problems complicated, can't calculate restaurant tips in their head, and are prone to fall into personal bias supporting arguments, I think the failure to schedule a meeting is more of a general problem, than a computer one.
You're elevating "computer literacy" as some sort of giant lift in the economic chain. Really? Sure, software engineering pays well, but it's also completely unrelated to sending out a calendar invite. If that were the metric, then why aren't secretaries paid $100k? Similarly, you can swap out software engineering with any other high-earning field, say hedge fund managing. Arguably, that's a better field, and you don't have to know how floating point numbers are represented, nor do you have to send your own calendar invites.
Finally, yes, mass surveillance is a real issue, and yes, more education about what is being done, what is possible, and at what scale these things are being done might help. It might not as well. These are much more political, legal, and philosophical questions than computing questions. After all, I don't need to understand how carbon and iron bond to understand the ethics of stabbing someone in the throat with a knife.
[0] Before anyone chimes in: I don't really care how to use GMail. It sucks.
I've had the same issue with gmail. The back and reply buttons are nearly identical, the reply one is just a bit curvier. I don't know why google has a reputation for good UI's, most of the ones I've used have varied from decent to awful.
Who told you Google had a reputation for good UIs? Every Google UI is atrocious. Gmail uses UI concepts that were tried and rejected by Microsoft back in 2000. (Drop-down menus that obscure less-used options under expandos.)
YouTube's ContentID UI deserves a special mention, it's not just awful, it's actively sabotaging people questioning ContentID strikes (90% of which are fraud.)
Or try the new Google Hangouts desktop "app". (Not really an app any longer -- now only the Chrome add-in is available.) It's hilariously awful. It's almost as if they're trying to sabotage their own product. Don't take my word for it, read literally any of the reviews on their own app site:
This reminds me of a UX lecture I attended in undergrad in the UK.
The lecturer flashes up a picture and says "who is this?". There is a sea of blank faces as a reply. The picture was of Roy Chubby Brown [1] who by some measures was the most popular comedian in Britain at the time. Out of nearly 100 people no one had any idea.
This would have been surprising until a week ago when a friend of mine sent me this image of course offerings for those signing up for unemployment: https://imgur.com/gallery/tCG6J
Side note: If anyone is looking for an experienced technical sales person in Boston, let me know! They have experience with startups, IPOs, large corporations, and channel sales, and have the sales gift in a non-sleazy way.
The same group did some research into that [1][2]. TLDR: there are differences in behaviour but error levels are similar and it makes no difference for the UI guidelines you should apply when designing.
All kids have been experts on computers for about three generations now. It's no longer a valid meme.
There are kids now who don't know what a VCR was, but you can assure them that their Grandpa was the only person in the household who could figure out how to program the timer.
The idea that kids or young people are better with computers is not actually true. Yes they can use apps, some of them can install them, but for the most part they are clueless when it comes to actually fixing a problem with a computer.
The clear message of the article is about increasing awareness of users' limitations, but I'm also wondering what can be done to increase people's computer literacy. When we see statistics about functional illiteracy in the traditional print sense, we might think about ways of minimizing demands for people to read things, but we might also wonder how we can help improve literacy.
So, believing in the analogy between print literacy and computer literacy, I want to ask that question here too. What helps or doesn't help, and what can change? How early would different interventions have to start to be helpful?
In my kid's UK primary school, ICT (Info & Comp. Tech) amounts to playing games - not even computing-focused ones, just crappy Flash games.
It's like teaching literacy by giving them only picture books. They'd be able to handle books, turn the pages, etc., just not actually use them properly.
With games like lightbot, apps like turtles and Scratch, and sites like code.org I can't really see the excuse for this.
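To be concrete about what "apps like turtles" means, here's a minimal turtle-graphics exercise, sketched with Python's built-in turtle module (Scratch and Logo teach the same idea): a short program a child can read, predict, and then watch run.

    import turtle

    pen = turtle.Turtle()
    for _ in range(4):      # draw a square, one side per loop iteration
        pen.forward(100)    # move 100 steps
        pen.right(90)       # turn 90 degrees
    turtle.done()           # keep the window open until it is closed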
Your comment caught me off guard and made me laugh out loud.
There were so many things about those early systems which were not universal skills at all, but simply hardware limitations (or holdovers from previous hardware limitations - or just terrible design).
And yet, after watching my kid learning to use a computer, I do feel a deep nostalgia for the single-purpose simplicity of the command line I grew up with.
> The clear message of the article is about increasing awareness of users' limitations, but I'm also wondering what can be done to increase people's computer literacy.
I think those are two sides of the same coin - if we understand the user better we can make software that's easier for them to use, which in turn makes the user more confident and better at using the software. The goal shouldn't be to take away the powerful tools and replace them with simplified tools, but to make the learning curve shallower so users can learn to use the software and eventually to the point where they can use complex features if they need to.
Agreed. But I think advanced users become comfortable with idiosyncratic aspects of computing that could be recognised as deficiencies, and improved. So as well as teaching, there's the matter of raising the standards of software quality.
An opinion stated to a community that (generally speaking) thinks that Git is a pretty good piece of software.
Snarky, I'm sorry, but you're 100% right. I grew up in a world where there was tons of optimism in software development. Programs like HyperCard, FileMaker, Microsoft Access, Visual Basic/WinForms, RealBasic, etc. promised to make (the most common type of) application development accessible to everybody with minimal training.
HyperCard's most successful developers didn't come from software development backgrounds-- they were artists, journalists, historians, etc.
All that's abandoned now. The optimism is gone. Developers who gave a crap seem to have disappeared a decade ago. The open source development methodology, which never gave a crap about ease-of-use even when Apple and Microsoft used it as the foundation of their products, has taken over all software development.
And I sit at my cubicle, thinking about the now-dead world that was emerging back in the mid-90s, thinking: I could have been a plumber instead.
Maybe there's hope for a new generation to come to be educated with those old values. And it doesn't need to be an entire generation... just enough people in the right places.
I didn't grow up in that age. Thanks for the link!
There is a scene in Silicon Valley (the series) where Richard thinks the interface of the platform is really simple and intuitive, but it turns out he only asked Level 3 people and above, and it was not easy to use for the Level 1 people.
Well, in Turkey, because of frequent bans on websites, a good number of younger people know what DNS is, what a VPN is, how they work, which one is better than the other, etc. I can say that people on Level 1 use these tools to solve "problems" and people on Level 2 can set this stuff up for them.
I didn't realise this until I heard that Hillary didn't know how to use a desktop computer and could only use a specific version of BlackBerry. Apparently it's still possible to be totally functional without using technology.
I don't think that's true anymore. If anything we're getting past the point where not knowing how to use a computer is harmful. Smartphones and tablets have really lowered the barrier to entry for modern (high-tech) life.
Desktop operating systems are complex and hard and if you do the wrong thing, you can break everything. Mobile OSes don't give you those options.
"Smartphones and tablets have really lowered the barrier to entry for modern (high-tech) life."
This is true - however these devices have also completely taken away our computational freedom as well. In fact, I'd argue that computer literacy is more important than ever. The only thing that has changed is that now society suffers as a whole from computer illiteracy, instead of just the computer illiterate. They can still post cat photos on social media, but a growing segment of the population has no idea how any of this works, and is at the mercy of Microsoft, Google, Apple, Facebook, Zynga, etc.
As someone who grew up with computers and started making Doom maps when I was 8, and programming C++ on Linux when I was 14, I see this trend as an oncoming storm. It's as if we were replacing libraries with cable television. Computers (phones and tablets specifically) are transforming from a platform that enables us to do anything we want, to a sort of Disneyland marketplace where nothing exists unless there is money to be made, regardless of ethics.
I think it's more important now than ever for people to understand how to identify and select legitimate software that doesn't have ulterior motives, and put together a system which serves no one but its owner. I suppose this is exactly what TFA says is impossible, but damn is it important.
Most people I know in business management default to passing tasks off to other people (as I think they should). They end up like the opposite of the absent-minded professor: They are skilled in getting people to do what they want, but have no technical skills of their own. I'd imagine when you get to that level, it's even more concentrated.
HRC and DJT were born in 1946. They were college-educated adults with years of career experience before it was reasonable to expect someone to have computer experience, and before the popular GUI computers were in use by the general public.
This is a great topic and an informative article. As a probable exception to the rule, I'm an "older" individual with decent computer skills. No doubt it's attributable to managing and programming computers out of necessity since the IBM PC days. Having to learn everything the hard way has its merits.
Related to the article: I created and maintain a database application for a non-profit arts organization. It involves membership, artwork inventory and exhibitions. It's a fairly complex task; almost all the work goes into the web-based UI, while the PostgreSQL backend/server is pretty straightforward.
The challenging part is getting the artists and volunteers to actually use the DB program. The UI is as simple and unambiguous as I can make it. There's only so much you can do to make a record with two dozen fields to fill in "simple".
One method is using dropdowns to select from, where that fits. Also avoiding hidden "tricks" the user would have to know, giving on-screen examples of proper field format (like date or time entries) and providing concise, specific, instructive error messages.
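For illustration, a minimal sketch of what I mean by an instructive error message - the field name and the date format here are made-up examples, not the actual schema:

    from datetime import datetime

    def parse_exhibition_date(raw):
        """Accept a date typed by a volunteer; explain the fix if it isn't valid."""
        try:
            return datetime.strptime(raw.strip(), "%Y-%m-%d").date()
        except ValueError:
            # The wording matters: say what was wrong and show a correct example.
            raise ValueError('"%s" is not a valid date. Please use YYYY-MM-DD, '
                             'for example 2016-11-28.' % raw)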
Even with all that effort, convincing users to try it out has proven the biggest hurdle to success. I've come to realize a key to the tool's utility is constant encouragement. Live demos are often a useful way to help people "get over the hump" of fear and resistance.
Ironically, the "power users" often show the greatest resistance to trying out the web app. While it's made to work as directly as possible, power users fear they will look "dumb" if they don't instantly grasp the operation of every feature.
What I've learned (again) is it requires empathy for users' fears and sense of intimidation, and assuring the app is designed with "foolproof" safeguards preventing accidental disasters. Above all it takes enormous patience training users, listening to user feedback, answering questions and never, ever criticizing when people make mistakes.
I admit I still catch myself accidentally hitting the browser's back button on a multi-page Ajax form without its own back button. Or having to fill out a form again because, instead of opening a modal or a new tab, the current page changes (e.g., TOS). I've come across countless UX developers who don't consider these subtle details.
And this is only a small part of the nuances a typical user faces. I don't care how beautiful or fancy your UX is. Familiarity is king in design, and if you stray too far from the current experience, I won't hesitate to say that my toilet has better UX.
Not very surprising, actually. They have a good chunk of people who are heavily reliant on mobile devices, and that predates smartphones. I have heard more than a few cases of university professors commenting on students' lack of computer skills, to the extent that some of their students submit their essays using their phones.
This is fantastic news. I am glad to hear I am in the upper echelons and that computing literacy - let alone programming literacy - has not caught on. My skillset would be useless if everyone could do it.
I found it funny that the researchers refrained from assigning a computer skills "level zero" to avoid the negative connotation. To experienced computer users, zero is just the first number.
Level 9: Builds rudimentary computer out of off-the-shelf parts, on a table covered in breadboard and DIP-switches, programs C compiler by hand on the DIP-switches with a hand-carved wooden stylus, and bootstrap-compiles the full-featured compiler and OS kernel for commercial computer hardware from painstakingly audited source before proceeding.
I read this piece for the first time about four years ago. I've been revisiting it at intervals ever since. As it happens, I had been assigned to a Javascript project at that time after a decade of working with Java. I had been initially dismissive of the "toy language" of Javascript but I also felt that as I learned it, I grew inexplicably fond of it. The Graham essay helped me put the puzzle pieces in their right places and realize that Java was really a pretty clunky language and that Javascript was, in fact, superior. Never went full LISP though -- I had tried Scheme in the past and failed to wrap my head around it -- but it definitely made me respect the ideas behind the language.
What is with the needlessly complex description of tasks?
> Tasks are based on well-defined problems involving the use of only one function within a generic interface to meet one explicit criterion without any categorical or inferential reasoning, or transforming of information. Few steps are required and no sub-goal has to be generated.
It reminds me of times when I was writing articles for journals and had to find filler to reach the required arbitrary page count.
While this report says it takes the "average across the OECD countries", it's not clear which factors were weighted or adjusted.
The information isn't wrong - it's just worth taking it with a grain of salt depending on how you are using it. That said, the US market, for example, lines up very close to the average, which is still staggering.
I'm not surprised. Typically "computer skills" are taught as "click here, here and here to edit a word document". I think going back to basics and teaching people how to do things from the command line would improve literacy dramatically, especially with the younger generation who have been shielded from the concepts of "files" and "programs".
That's a matter of opinion that I am sure many Linux/unix users do not share. Even on windows the tree structure is easiest understood in plain text without all the noise. It's literally a diagram of the file system.
I grew up using the command line, I kept stubbornly using it for years after GUIs were ubiquitous, and I still use it regularly at work and at home. None of that means that it is the best tool available for this specific learning task.
> It's literally a diagram of the file system.
So is Windows Explorer or any other GUI file browser, in every way. If you put it in details view, it even displays the same data in the same way as a command line directory, except you can navigate it intuitively with a mouse.
Explorer will show a lot of unrelated stuff on the left. It will hide important information (filenames). It makes it difficult to see hidden files. It will pollute the view with different icons for file extensions.
When someone is learning what a file is, explorer shows both too much and too little information.
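To make the "plain text diagram" point concrete, here's a minimal sketch (in Python, with an assumed starting path) that prints every directory and file as an indented tree - no icons, no sidebar, nothing hidden, nothing added:

    import os

    def print_tree(root):
        """Print every directory and file under root as an indented tree."""
        for dirpath, dirnames, filenames in os.walk(root):
            depth = dirpath[len(root):].count(os.sep)
            print("  " * depth + os.path.basename(dirpath) + "/")
            for name in sorted(filenames):
                print("  " * (depth + 1) + name)

    print_tree(os.path.expanduser("~/Documents"))  # starting path is an assumption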
Keyboards sure are hard to beat. Maybe a foldable Bluetooth keyboard that you can have in your pocket and then place on your lap or a table.
Where do you place your mobile while typing with both hands, though? Maybe in a casing so it can sit on your lower arm. Or a Velcro/burdock case so you can place the mobile on textile surfaces.
"Computing?" Now that's a term I haven't heard in a long time.
Who does computing outside of a data center? I'm typing this on a laptop right now, but I wouldn't say I do any "computing" on it. Even at work, I would say that the actual amount of "computing" I'm responsible for (i.e. a program I specifically wrote to calculate some number) is very low compared to simply text editing, web surfing, and music playing.
Throwing around this archaic and ill defined term, and then dismissing the phone as "a media consumption device" strikes me as elitist and out of touch. Is photography consumption? Clearly not, since it's production. Is media editing consumption? No. Is it computing? Well given that nonlinear digital video editors were big expensive machines several years ago, I'd say so, unless of course we're moving the bar. Is writing computing? 30 years ago it was. Do people actually write on their phones? Yes. Oh communication doesn't count? How about term papers? Do they do that? Yes, they do.[0]
Fair enough, it's a bad term. What I mean is instructing the computer to do something for them, not having to manually perform every action themselves. The sorts of things people do in Excel, but in a more automated way.
As for what phones are capable of, I'm aware. But they are the wrong tool for the job. For the most part they are walled gardens where the only things you can do are what some app developer thought you might like to do. The industry is stuck in a regressive state where we are trying to limit users, not empower them.
They're the wrong tool for the job of creating and managing files in a filesystem. They're not the wrong tool for writing a Google Doc, or updating a Facebook status, or taking a photo and uploading it to Instagram. Arguing that we need to teach users paradigms like a command line is out of touch with what computers are to most people today.
We should be making tools that work with the way people use computers, not making users work at learning the tools we build.
Yet they are many times more powerful than the computers we had in the eighties. The inputs look the same though; maybe it's time to invent some new IO, for example a wristband so you can type in the air and a HUD like Google Glass.
Why should the typical computer user need to understand concepts like files and programs? That sounds akin to saying that before someone can drive a car they should need to learn how to strip down an engine. There probably are those who advocate that, but they're in the minority.
A computer is not a car. A car is an appliance, and the perfect bullshit-free car would be one that simply drives you from point A to point B safely, without any additional input besides providing point B.
A computer is not an appliance; it's a general-purpose problem-solving tool. It's closer to reading and writing than to a dishwasher or a fax machine. You can argue that applications are like appliances, and that makes some sense, but refusing to learn about the environment those applications live in makes you that much less able to do things that span multiple applications.
Because those are the primitives that make computers work. Interfaces change like fashion, but something like "saving a file" was useful 30 years ago and is still useful now.
Stripping down the engine would be more akin to coding the programs themselves. This is more akin to saying that before someone drives they should know about the accelerator, brake and steering wheel, the basic tools that they use to interact with the car.
To follow the analogy further, it's both. We've never had a machine as general purpose as computers are. I'd classify it as more like writing and mathematics than any piece of technology. And with those we teach from the bottom up.
The same reason you want literacy in any other field: to not be led by the nose and taken advantage of.
Take medical care, for example: in theory I should not need to know a thing about it. Yet not all doctors show the required dedication to figure out a problem to its root cause. As such I need some literacy in this field, so I can solve the problem myself, or insist on and steer it towards a solution.
I suspect, but can't say for sure, that every commoditized technology throughout history has gone through this process where first you can't use it without understanding it in depth, then people try to build "it just works" experiences with gradually decreasing pushback from the experts, and eventually very few people actually understand the thing.
> Why should the typical computer user need to understand concepts like files and programs? That sounds akin to saying that before someone can drive a car they should need to learn how to strip down an engine.
I'd say it sounds a lot more like learning the basic theory of hydraulic brakes (because you shouldn't brake too much but rather switch to a lower gear when you drive down from mountains.)
Some of this has nothing to do with computers or interfaces. A lot of people couldn't do the "find info across emails and schedule a meeting" task whether the inputs and outputs were paper, people, whatever. The problem is they lack the ability, in short, to problem-solve. Something we excel at, we can't imagine being hard for others. Being able to discern what the problem is, what the assets are, what the done condition is, how to divide up and order the problem, what steps to take, what order to take them in. How to focus. Curiosity, the ability to explore, trial and error. Just being able to formulate a test, determine the result of the test, and fathom whether the result is applicable to your task.
I think the labels are quite harsh. Level 1 can be quite functioning for certain applications:
- below level 1: can receive and reply to electronic communication (once told how)
- level 1: can retrieve information from knowledge archives of all sorts. Can google alright.
- level 2: can organize archives/folders/mail, can organize communications in a group, can find information in less obvious places and google quite well
- level 3: has skills that go beyond what is currently needed in 90% of non IT-related jobs.
Stunning though is the ~20% that can't do the simplest task. I wonder if they are capable of using an ATM.
Nielsen Norman Group writes articles that convey important technical information in very easy to understand language. Reading their writing makes me feel smarter, and I think it's more than a feeling.
I'm not a web developer, I'm a web user but I have sometimes programmed computers. Quite a bit of human-machine interface, where mistakes have an extreme need to be avoided. For decades.
If I was going to do a website, I always thought I would want the kind of thing that would make Nielsen proud. For decades.
I think each of us computer experts should at least once in a while face a technical situation that completely befuddles us. Maybe this way we can get a sense of what it's like to be a normal "non-computer" person.
I'm not talking about the challenge of learning a new programming language or framework. We know that will have its difficult spots. I don't even mean chasing down the trickiest bug. That's what we were born and trained to do!
I mean something that you would think should be simple and obvious, but you're stuck, have no clue what to try next, and think the whole world must just be broken.
This just happened to me. Night before last, my car's ESC (Electronic Stability Control) light came on. I thought I must have bumped the ESC disable button, but tapping it a few times did nothing.
A little while later, the Check Engine light came on to keep the ESC light company. Uh-oh. I explained to my friend that it didn't mean we had to stop right now, unless that light started blinking. It didn't blink, and the engine sounded fine.
Isn't it funny how a car has this one all-purpose light that could mean just about anything, and the car knows which of those many things it actually is, but it won't tell you?
Unless you're a mechanic. Or unless you're smart, like me, and had an OBD-II device that comes with an app for your phone. Now I can just read the code myself!
So I opened the app, tapped Connect, and there it was: Motorola Roadster 2. Um, that's my speakerphone. No OBD-II?
I went to the website and read the manual for my OBD-II device and found my mistake: I needed to push the Bluetooth pairing button on the device before I could connect. So I pushed it, saw the blue light blinking, went back to the app and tapped Connect once more. No sign of the OBD-II device, just the speakerphone again!
Something was really messed up and I had no idea what it was. I'd connected to the device before without any problem, but I'd also done a factory reset on the phone since then, so whatever it was wasn't working any more. Rebooting the phone didn't help, and turning the car ignition off and on a few times didn't either. I gave up for the night (and the next day).
Finally today I got the courage to try again. First, of course, I tried all the things I tried before, thinking that maybe this time they would work.
No such luck. Finally I got the idea of going to the Bluetooth menu on the phone to see if I could find the OBD-II device that way.
You can guess the rest of the story: after I pushed the pairing button on the device, it showed up in the Bluetooth menu and I paired the phone with it like I'd done a long time ago before that factory reset. And then the OBD-II app connected and showed me the error:
P0504 Brake Switch "A"/"B" Correlation
I looked that up online and found the good news: indeed this was not an engine problem but a brake switch problem that also disables ESC. And the bad news: I'd been driving around with no brake lights and thought I had an engine problem instead.
It was only sheer luck that I happened to think of checking the Bluetooth menu. When the OBD-II program showed me the list of available devices (only the Motorola speakerphone), it didn't say anything about what to do if my OBD-II device didn't show up in the list. And when I found their manual online it didn't offer any clue either.
I was just supposed to "know" that I should go to the Bluetooth menu before trying to use the app.
I still tend to think of myself as some kind of computer expert (in fact many of you have used code I wrote), but I'm grateful for this experience - it taught me to have a bit more sympathy when people get so frustrated with their computers and devices.
Stuff like that happens all the time. Totally stumped by something because the path forward simply doesn't occur to me.
Computer technologists differ from 'everybody else' in that they keep trying until they succeed. In other words, you don't lose until you give up. In fact, as a consultant it's really my job to struggle through issues to the conclusion. My critical value is that they can waste my time on stuff like that instead of their own employees' time.
I sometimes act like a "level 1" user when dealing with applications that I will only use once or twice.
Why spend hours trying to re-learn how to do something very basic and simple? I have better things to do with my time, even though I might be capable of using a complicated application.
I worked at Redfin building tools for real estate agents and my ex was at Expedia doing tools for call center employees. Most of them don't even know about multiple tabs. We are in our little bubble about tech competency.
> The main point I want to make is that you, dear reader, are almost certainly in the top category of computer skills, level 3. In the United States, only 5% of the population has these high computer skills. In Australia and the UK 6% are at this level; in Canada and across Northern Europe the number increases to 7%; Singapore and Japan are even better with a level-3 percentage of 8%.
> Overall, people with strong technology skills make up a 5–8% sliver of their country’s population, whatever rich country they may be coming from.
> You can do it; 92%–95% of the population can’t.
A lot of smart people price themselves too low because they're simply being kind. I'm one of these. It is worth remembering that, as a HN reader, self-selected for such content and community, you may also be too kind.
I hope that in a decade the distribution shifts greatly towards more computer literacy. I get so tilted just thinking about things like... oh, why browsers don't have regex search by default... it's because the majority of people don't know regex.
I accidentally put a friend of mine off Wikipedia a few years ago by using Ctrl+F on it in front of her. She assumed it was a feature of the website, not of the browser, and thought it looked too complicated.
In Firefox it's called "Find", not "Search". I think this is a better name because it distinguishes it from web searches, which most people will think of when you say "search".
Why should the majority of people know regex? It sacrifices all clarity for extreme compactness, and the vast majority of users don't need its power. I'd love to see more powerful, versatile searches generally available--booleans, date ranges, so on and so forth--but regex is ridiculous overkill for most people's use cases.
I think this is one of the major stumbling blocks for computer literacy efforts--too many proponents define "literacy" not as "useful, applicable skills" but as "the exact tools that I'm familiar with, and nothing else." Elsewhere in this thread there are people saying that command line skills are necessary to understand concepts like "files" and "programs".
> I'd love to see more powerful, versatile searches generally available--booleans, date ranges, so on and so forth--but regex is ridiculous overkill for most people's use cases.
To do that you're going to reinvent regex, usually with a clunkier interface that has to be re-implemented everywhere. Or you'll come up with a text-based system like Google's, which is again its own domain-specific language.
Edit - in fact, I just remembered we have such a tool where I work. Because the support folks were deemed too stupid to learn regex, we created a custom "language" that turned out to be a bastardized, dumbed-down version of it. Instead of the support staff learning a skill for life, they only learn our abomination.
> Or you'll come up with a text-based system like Google's, which is again its own domain-specific language.
I was thinking of Google's system specifically, yes.
I use regex a lot. It's extremely powerful, and when you really need it then nothing else will do. It's also phenomenally obtuse, it takes a real time investment to become fluent in it, and 95% of users will never need its power.
People aren't ignorant of regex because they're stupid or uneducated. They're ignorant of regex because they don't need it. They wouldn't use it if they had access to it, because they have better things to do with their time. They'll get more done if you give them a tool whose power-to-usability balance is actually suited to their needs.
Regarding your support staff, perhaps they did in fact need regex despite that decision, or perhaps your custom language was not the right choice for their needs. That doesn't mean that the typical user has any use for regex.
To make matters worse, most programmers don't really know regex well enough to use it without a reference, and even if they think they do, they probably don't know the problem domain or the nature of their regex well enough to get it bug-free. See basically all attempts at parsing email addresses with regex for an example of this.
Then you have the gotchas... E.g. most implementations have performance issues in many scenarios, leading to situations like when SO went down a few months ago.
Regex is actually a pretty good example. I think something like 99.9% of people get bitten by it. I've been using them for like 10 years and I rarely get the more complex ones working on the first try - not just because of the inconsistencies between Perl, JS, .NET, sed, shell (extra escapes) and Python, but also stuff like look-aheads/behinds or just plain logical errors which are pretty hard to spot.
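To make the email-address trap concrete, here's a deliberately naive pattern (an illustrative example only, not anyone's production code) that goes wrong in both directions:

    import re

    # A "reasonable looking" address pattern that is wrong in both directions.
    naive = re.compile(r"^[\w.]+@[\w]+\.[a-z]{2,3}$")

    print(bool(naive.match("alice@example.com")))    # True  - fine
    print(bool(naive.match("o'brien@example.com")))  # False - a valid address rejected
    print(bool(naive.match("bob@example.museum")))   # False - long TLDs rejected
    print(bool(naive.match("..@example.com")))       # True  - nonsense accepted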
I'm not so sure the next generation is going to be significantly more computer literate. With the trend towards curated devices (read: smart phones and tablets) general purpose computer skills are being pushed back into the niche they used to occupy in decades past.
And certain areas in tech are harder to enter too. Web-dev a decade ago was easy to start, now every site you might be familiar with when starting out uses the RIDICULOUS_OVERKILL.js framework, SASS/LESS because CSS is for amateurs!, OBSCURE backend and THIS_WEEK's database. Plus npm and these 600 plugins to make development "easier".
It makes the spaghetti PHP of bygone years look like a lesson in modernism.
That doesn't necessarily follow. Cars are much more important now than in the 1940s. OTOH, "motor literacy"—in the sense of knowing how cars worked and being able to keep them running—might have been a lot more common back then.
That's an excellent insight, actually! A century ago, cars were fairly simple devices which required skill to start and operate. These days they are highly complex and hide their complexity nicely behind a dashboard. Once the abstraction leaks, however slightly, we call the pros. Or at least I do.
Computer technology follows the same trajectory because most people purchase technology for solving tasks, not because they have a craving for complexity. So "computer literacy" is about as important to society as "motor literacy".
Common sense is as important as ever, though. I.e. having a repository of background knowledge to be able to detect bullshit, fake news, and propaganda on the internet. That's the kind of old school literacy that we need.
It might be some kind of bias but I have always felt like when it comes to cars average Joe is at least capable of changing a tire, but when it comes to computers the very same person does not even seem to try.
Changing a tire takes some elbow grease, but the nature of the problem and the steps involved in the task are all pretty obvious. It might take 20 minutes or a half hour if the lugs are really tight, but you can grasp the whole problem in just a few seconds' thought.
Little if anything to do with computers manages to be so intuitive.
Young people today were brought up on tablets and web apps; they know less about how computers work than people brought up 20 years ago (assuming those people had computer access).
But why should they need to? I don't know how my car works, but I drive. I don't know how my camera works, but I use it. I don't know how my TV works, but I use it.
You're just old, complaining about how times have changed.
That's an excellent example of how computer literacy could improve things. People manage their cameras and photo libraries with whatever godawful software comes with their camera or OEM laptop. When they get a new phone/laptop with different software they have to learn from scratch.
If they knew what files and folders were they could just navigate to the directory and copy the files, maybe even do it from the CLI so they could then script it, saving themselves some future effort. But instead of expecting them to learn something moderately difficult (i.e., something we can teach 1st graders) we pander to them and make them learn something just as difficult several times over. It's not surprising people switch off and don't want to learn.
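For what it's worth, the "copy the files yourself, then script it" step really is small - a minimal sketch, with assumed source and destination paths that would need adjusting for the actual camera:

    import shutil
    from pathlib import Path

    src = Path("/Volumes/CAMERA/DCIM")         # where the camera mounts (assumed)
    dst = Path.home() / "Pictures" / "import"
    dst.mkdir(parents=True, exist_ok=True)

    for photo in src.rglob("*.jpg"):
        shutil.copy2(photo, dst / photo.name)  # copy, keeping timestamps
        print("copied", photo.name)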
> I don't know how my car works
I think you're vastly underestimating how much you know about cars and how much of that operating knowledge you need every single time you drive. Everything from requiring petrol, to what happens when you push the brake pedal, to the road rules. In fact, I bet you could even tell me some road rules from places you've never visited.
You might not know the details of how an internal combustion engine works, but I'm not suggesting most people should learn the inner details of how a cpu or hard disk works, just to learn how to drive their machine.
I know, in theory, how to build and service an internal combustion engine. I just don't want to pry my arse out of my comfy chair and get my hands dirty. Also, I don't own the necessary tools, because I don't find IC engines to be remunerative or fun.
So I have never done much more than swap a flat tire for a spare, as a field repair. Why then do I get so pissed off when my regular mechanic tells me that a certain repair can only be done at a dealership? I wasn't going to do it myself anyway.
Because it means that I can't turn off the "check engine light", which sends my spouse spinning off into a fantasyland wherein the car violently explodes upon reaching 50 mph, or abruptly shuts down on a dark desert highway in the middle of a war between goblins and elves. Meanwhile, the "check engine light" in my daily driver has been on continuously for more than a year, because I know what the problem is, that it isn't serious, and that repairing it would be more expensive than just ignoring it until gas prices rise above $4/gal, or until it can be added on to another repair that would already require putting it up on a lift and dropping out the gas tank.
A computer does not have a "check engine light". It has an authentic-looking popup that says "Your computer is infected! Click here to download and install the Definitely-Not-a-Malware Toolbar to fix it. (And there's no need to bother your tech-savvy friend about this one....)"
Why would anyone need a bespoke command line program to manage photos in 2016? I'm really confused by what problem you think people have that they themselves don't think they have.
Pandering because they're not interested in your hobby? What type of arrogance is this? You could make this same argument about literally anything.
Look. At the start you're lamenting that people don't write their own programs instead of using "godawful" OEM software - which they know how to use, because they do use it. Then in the second part, you say that it's cool not to know how an internal combustion engine works, because you know how to drive. That's shifting the bar. They are, as you say, "driving their machine" by using the "godawful" OEM software. They're solving their need. Why can't you just respect that?
Because that's what this whole article is discussing: computer skills.
Your examples reference appliances; cars drive, cameras take pictures, TVs play stuff. Computers support multiple tasks.
Unless you expect all computer-based work in the future to be run through bespoke single-function apps that have one giant button, people are going to keep needing these skills.
I suggest you look at your phone, because giant-button single-use apps are exactly what we have.
Taking a photo? Tap the picture of the camera, and then tap the giant button. Sending a message? Tap the speech bubble, type out the message, and hit the big send button. And so on, and so forth.
Yes, but this isn't work.
Work is more discrete, less specific; I very much doubt we could have such simple apps that could support the employment of ~1/3 of the population (those in the article with -1 or 0 "computer skills").
Work is where complex tasks expose themselves; tasks that sit between these apps. If you don't have the knowledge or know-how to move between the buttons in the increasingly appified future, you're going to struggle.
A driver needs to know several things about how their car works - like use of oil/coolant, what the clutch does, what brakes they have, etc. - in order to drive safely.
I know many good drivers who don't know what oil/coolant or a clutch is, and heck, I don't even know my brake levels, and I'm a DIY car enthusiast. They just call for service if something shows up on the dashboard or they hear something.
That does not mean they are ignorant drivers, as long as they know how to drive safely and get regular services according to the manufacturer's guide.
My car doesn't have a clutch. It has an automatic transmission. I don't know what type of oil or coolant it needs, or even where to put it. I don't care. The light comes on saying it needs routine maintenance, then I drive it down to the Genius Bar and give them my credit card. Problem solved. My brakes make the car stop. That's all I need to know. I don't know how a fuel injection system works either. I don't need to know any of this stuff. I get in. I press the button. I put it in D, and I go. That's pretty much all I need to know, and that's all I care to know. I get no benefit from any additional knowledge. Could I look up where the drain plug is and what type of oil I need? Sure, but I don't want to.
Yeah. I'm proud that I don't know anything about my car, because you know what? I don't have to. That's how technology advances. You call it ignorance. I call it abstraction.
If you don't know if your brakes are ABS, or what happens when ABS kicks in, or how to drive with/without ABS then IMO you don't know one of the very elemental basics about driving safely; that's what I was (poorly) communicating.
With a new car, you can probably get away with not checking the oil or knowing how to top it up (ditto the coolant); both of which are part of the UK driving test IIRC. But having the engine seize because you didn't add oil, or the car over-heat, can be very dangerous. All the vehicles I've owned have had manuals that describe oil/coolant/tyre checks and list them as essentials for drivers, maybe that's a UK thing - like how toasters have a leaflet telling you not to poke metal objects in them, kettles say not to run them empty, etc..
Fuel injection I don't think pertains to safe driving?
>You call it ignorance. I call it abstraction. //
Ignorance and abstraction are different. Choosing to be ignorant of the workings of your vehicle is not an abstraction per se. Like knowing how to stack Lego isn't an abstraction of the chemical manufacturing process or the physical theories involved.
I used to think computer literacy was obviously increasing over time, because that was the narrative about the future that I heard (computers are becoming more important, people are getting more exposure to them → clearly people will understand them better and better), but now I'm not so sure!
Given that narrative, it should be increasing in the sense of getting more people off the floor and on to the bottom rung.
There's no reason to expect that it should be increasing in the sense of a higher percentage of people reaching the higher rungs.
It's a literacy vs. fluency thing. Improvements in global communication and travel should mean that more of us can speak a few foreign languages in the sense that we can ask directions to the station and buy a beer, but it doesn't necessarily mean that significantly more of us would be able to read and discuss the details of the major works of literature in those languages.
Age does not determine literacy. This gap will persist through at least the next 20, 30 years, if not longer.
I'm a senior at a private high school that's ranked top 3 in the nation. If I went up to someone and asked them what a regex was, nine times out of ten they wouldn't be able to tell me. Almost all of them (who are otherwise some of the most brilliant people I have met/will ever meet) wouldn't be able to reinstall Windows/macOS if their boot sector was corrupted.
Regex is interesting because almost anyone that works with documents and spreadsheets would benefit from it.
Similarly with programming: even as a non-technical person at a tech company you would definitely encounter situations where some simple code/db queries would help a great deal.
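As a concrete example of the "simple db query" kind of task - the database file, table, and columns here are assumptions purely for illustration:

    import sqlite3

    conn = sqlite3.connect("sales.db")   # database file is an assumption
    rows = conn.execute(
        "SELECT region, SUM(amount) FROM orders "
        "WHERE ordered_at >= '2016-01-01' GROUP BY region"
    ).fetchall()
    for region, total in rows:
        print(region, total)             # one line per region with its total
    conn.close()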
Ha, I would already consider it progress if the Preview app on OS X wouldn't just ignore any non-alphabetic characters I put in the search field of a PDF.
Serious question: why does anyone still listen to this hack?
The majority of his results are either questionable or completely obvious, and he's clearly not very good at accounting for his own biases. Some of his work has set back the field of design quite a bit - like the nonsense about users not reading on the internet that has transformed itself into an unkillable monster.
p.s.: if you're coming to say "but dude, users really don't read on the internet" then reflect on the irony of doing that in a discussion forum.
I'm not sure that you read on the internet. The statistics cited in the article support his points in a fairly well-constructed way. Can you give an example of some specific works of his that you find fault with?