Years ago, a DBA I brought onto a project told me that my DB design was not bad, given that I'm a developer. Then he made it twice as fast by redesigning tables, adding triggers, etc. (it wasn't an ORM issue.)
Sadly, DBAs are not even on the radar of small to medium projects, and if they are ever hired it's too late to take full advantage of their work. The DB designs developers come up with on their own are simply too naive. Somebody could say that hiring a DBA would be premature optimization; IMHO the reality is that the tooling we use (especially ORMs and their migrations) is designed to create vanilla DBs and vanilla queries (which are fine in low-volume applications.)
An example from the distant past: I remember when foreign keys were taboo among Rails developers because Rails didn't have a way to express them in its DSL. I added them with manual SQL in migrations. I was told they were useless, but I kept them in the DB (later I discovered that many developers don't know SQL and never took a DB course at university.) Fast forward to a few days ago: a developer on a Django project asked me if we had indices in the database, because he'd been told they could speed up queries. I explained to him that Django adds indices in the obvious cases (primary keys and foreign keys), and that I had added some on the fields we often use in our queries.
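(To make the last point concrete, a hypothetical sketch with made-up table and column names: the ORM already indexes primary and foreign keys, but columns you routinely filter or sort on usually need an index added by hand.)

    -- the kind of index the ORM won't add for you:
    CREATE INDEX idx_orders_status_created
        ON orders (status, created_at);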
You shouldn’t need a DBA for a small or medium project. Any developer who hasn’t taken the time to learn basic RDBMS theory like normalization, indexing, foreign keys, etc. doesn’t take their craft seriously. Knowing the basics of how databases work is a required skill in almost any implementation.
The issue is “framework developers”. I am not opposed to learning and using frameworks but you should always know at least one level below the framework you are using.
If you are a web developer who uses Angular/React and Bootstrap every day, you should know JavaScript, CSS, and HTML well.
If you use an ORM, you should know enough about how databases work to have a mental model of what the database is doing.
> you should always know at least one level below the framework you are using
I've never heard that good advice put so succinctly before. Has this advice been around for ages and I just missed it, or is it a newly relevant concern?
BTW, what would two levels down be for, say, a React developer, out of curiosity? Would it be the browser engines? Knowing how JIT compilers work, etc.? Or would it instead be a deeper understanding of the language spec, so that you know more about its internal workings? Not that there has to be a clear-cut answer.
I heard it on a podcast and I am going to butcher the explanation but they explained it as knowing the “abstract machine”.
When I started learning AppleSoft BASIC back in the 80s, I knew how to optimize my BASIC programs because I knew how the interpreter worked, and I also knew assembly.
When I started learning C#, I knew C, the Win APIs, and how the CLR worked.
But for a web developer who communicates with backend servers, I think it’s important to understand HTTP.
I've heard a similar idea on Hanselminutes. The way he states it is that everything is layers of abstractions, and in order to be great at whatever your chosen layer is, you should be pretty good at the layer below it and of passable quality (think junior dev level) at the layer below that.
I would think one level down from React is the DOM API, and one level down from that is the rendering engine (i.e. knowing how DOM operations affect repaints, whether repaints block the main thread, etc.).
Language-wise, I'd say babel "javascript" is the highest level of abstraction. One level below is understanding what things like JSX and class properties transpile to. One level below is understanding JIT strategies (e.g. when hidden classes deopt), memory usage of various patterns, etc. One level below is understanding specific semantics of high level APIs (e.g. exact accuracy of a high res timer implementation), or regexp memory footprint by looking at browser source code.
For a react developer I'd say one level is understanding reactive design and maybe how you would implement it in pure JavaScript without react. One level below that would be how you would implement JavaScript and maybe a bit of knowledge how the engine works and shadow Dom.
If you're running into performance problems, knowing those things isn't enough. You need to know about query plans, different complexities of each kind of join algo, etc as well as how to get the query plan, how to profile your database query, etc. Basically databases are hard and you probably will need more than basic theory for anything that is more than a low-traffic CRUD app with latency SLAs on the order of hundreds of milliseconds or more.
This isn’t rocket science either. The theory behind database optimization is basically the same across every RDBMS I’ve ever used (SQL Server, Postgres, MySQL, and to a lesser extent Redshift). The second thing I learned about databases is how to look at a query plan and how to optimize it.
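(For example, a Postgres-flavored sketch with hypothetical tables; SQL Server and MySQL expose the same information through their own execution-plan tooling.)

    EXPLAIN ANALYZE
    SELECT o.id, c.name
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.status = 'pending';
    -- a sequential scan over a large orders table here usually means a
    -- missing index on orders(status) or orders(customer_id)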
Why would it be a developer’s job to learn how to optimize a program but not to learn how to optimize a database query?
I didn't claim it was rocket science nor that developers shouldn't learn these skills; only that optimizing a database is out of bounds of what most would consider "basic relational database theory". But beyond that, lots of developers aren't interfacing with a relational database. In a past life, I worked on embedded electronics. Other people work on operating systems. Others specialize in compilers or graphics or etc. Databases are a good skill to have, but it doesn't mean everyone should train like a web app dev either.
> Any developer who hasn’t taken the time to learn basic RDBMS theory like normalization, indexing, foreign keys, etc. doesn’t take their craft seriously.
Yet I've never once been asked about this in an interview despite using it maybe once a week for years.
>you should always know at least one level below the framework you are using
I run into this problem whenever I'm interviewing for devs on my team. As a Java shop, we usually get people who come in and claim to be proficient in Spring, but when I start asking them questions about the underlying technologies they blank.
My favorite weed-out question when I'm worried someone is too tied to a framework (especially when it's one that isn't actively in use on a project) is to ask them to give me some examples of its limitations. To me, being able to recognize the things your favorite framework is not able to solve effectively is the best indicator that you aren't using it as a crutch.
I can’t think of any limitations of ASP.Net MVC with all of its extensibility and I have a fairly good understanding of both the frameworks and the underlying technology (HTTP). I’m not sure I could answer that question.
Now on the other end AWS CloudFormation for infrastructure as code, I could go on for days about its limitations.
1) It's a web framework. So, not suitable for command line tools, desktop GUIs or persistent message queue workers. But you can do all of that in other kinds of C# .NET app.
2) However, .NET is a garbage-collected language and should not be your first choice for e.g. a real-time device driver or OS kernel.
Mathematically, you're not wrong. You can draw a line around the set of things that any particular software platform is good for, and the area outside of it will be infinite.
But people do try to do some of those things that I listed, using ASP.NET.
I was thinking you meant in the context of creating a web application but yes. You should never use ASP.Net to run any long running process even if it is kicked off via a REST call. You should use out of process web workers or some other type of queuing mechanism.
As someone who has had DBA in my job title before, it saddens me to think a DBA was needed to explain the importance of foreign keys or indexes (this should be general developer knowledge at this point). As a general rule, any field used in a join or where clause is an index candidate. And foreign keys (along with other constraints) are like type checking in code. They help you not do anything stupid later.
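(A minimal sketch of both points, with hypothetical table names: the foreign key acts like a type check on the data, and the index goes on a column that shows up in joins and WHERE clauses.)

    ALTER TABLE orders
        ADD CONSTRAINT fk_orders_customer
        FOREIGN KEY (customer_id) REFERENCES customers (id);

    CREATE INDEX idx_orders_customer ON orders (customer_id);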
> later I discovered that many developers don't know SQL and never took a DB course at university
My DB course taught us relational algebra and calculus and you were expected to learn SQL independently.
I am constantly bemused tho’ at the extraordinary lengths some devs will go to to avoid SQL. I’ve known people who will grudgingly do SELECT * FROM table; then extract the one row/column they want on the client. Then they complain that the DB is slow when its loadavg is 0.01...
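(Spelled out with a hypothetical table: the first form drags every row and column over the wire just to throw most of it away; the second pushes the filter and projection to the server, where the indexes live.)

    -- filter on the client:
    SELECT * FROM users;

    -- filter on the server:
    SELECT email FROM users WHERE id = 42;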
To be fair, SQL isn't exactly a friendly language. It's basically relational algebra plus a grab bag of utilities for dealing with whatever technology was in vogue at each iteration of the standard. Relational algebra + XML + JSON + etc. And even in spite of all of the extra tacked-on features, you can't push standard SQL very far before you need to depend on implementation-specific features. So yeah, everyone should learn SQL, but I can appreciate the reluctance to dive in.
I find SQL queries (select, insert, delete...) to be fairly friendly. It's kind of like semi-broken English where you fill in the blanks. My first programming language was Python, and SQL queries weren't a big deal, once I knew how to order the clauses.
Though, once you get into more advanced things like stored procedures and triggers, SQL becomes a little hostile.
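(Even a trivial trigger already drops you into vendor-specific procedural territory; a Postgres-flavored sketch with a made-up orders table that has an updated_at column:)

    CREATE FUNCTION touch_updated_at() RETURNS trigger AS $$
    BEGIN
        NEW.updated_at := now();  -- keep the timestamp current on every update
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_touch_updated_at
        BEFORE UPDATE ON orders
        FOR EACH ROW EXECUTE FUNCTION touch_updated_at();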
It is like the developer's mind is somehow incompatible with SQL. SQL = relational algebra + XML + JSON? That is most certainly not going to be friendly. XML and JSON are for NoSQL. Keep it that way and you will eventually see the light.
I'm not sure what your point is, but you may have missed mine. My point is that SQL has had a lot of things bolted onto its relational algebra roots. XML and JSON were two examples, but stored procedures, triggers, and window functions are others, and the list is quite long (and growing!). The problem isn't so much that SQL can do a lot, but that there's no holistic theory underlying it all; it's all just bolted on. And you can't just stick to the relational algebra subset of SQL because that won't get you very far; the features that were bolted on solve real problems, but ideally we'll find a more powerful theory than relational algebra that will do a better job at holistically solving the set of problems we're trying to make SQL solve.
There is a theory underlying it all: relational algebra, as you mentioned. The bolted-on parts are just that. My point is discard those (XML and JSON in particular) and SQL is not at all unfriendly. It is in fact the friendliest way yet discovered to deal with data in large shared data banks. XML and JSON (as a model for how to store large amounts of data) are an absolute disaster by comparison.

XML and JSON features exist in databases primarily to check a box on sales presentations. They aren't needed. If you are using them heavily, you are making your life more difficult than it needs to be. XML and JSON database features aren't completely useless. They are one of many tools you can use to get data out of XML and JSON and put it into normalized tables where it can be efficiently and elegantly stored, accessed, and otherwise dealt with in the future.

The relational algebra subset of SQL (tables, rows, columns, primary keys, referential integrity constraints, check constraints, indexes, joins, where clauses, etc.) will get you very, very far.
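(A Postgres-flavored sketch of that "get it into normalized tables" use, assuming a hypothetical staging table raw_events with a jsonb payload column: the JSON operators are just the ramp from the blob into a normal table.)

    INSERT INTO orders (customer_id, total_cents, placed_at)
    SELECT (payload->>'customer_id')::bigint,
           (payload->>'total_cents')::bigint,
           (payload->>'placed_at')::timestamptz
    FROM raw_events;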
My second point is that something about relational databases and SQL just doesn't sit well with (many) software developers. As a result, we have a lot of XML and JSON blobs stored in key/value databases, a lot of software that doesn't work very well, and we are still dealing with a lot of problems that Ted Codd solved 48 years ago.
I followed your point, but my point is that the relational algebra part of SQL isn’t sufficient to solve all or most of our info search/retrieval problems; you end up needing other things—analytics, window functions, polymorphism, etc. You can’t push vanilla relational algebra SQL very far before you need some extension.
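(Window functions are a good example of how quickly you hit those extensions: something like "latest order per customer" is clumsy in the pure relational-algebra subset but straightforward with OVER; hypothetical table names again.)

    SELECT customer_id, order_id, placed_at
    FROM (
        SELECT customer_id, order_id, placed_at,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY placed_at DESC) AS rn
        FROM orders
    ) ranked
    WHERE rn = 1;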
Funnily, and contrary to the article, in my previous job the QAs wrote SQL to validate the operation of the app, and usually better than the devs themselves.
Here's a nice counterpoint.. I was on a project where the chosen database was Vertica. I think everyone assumed the DBA had used columnar style databases in the past. So later we found out the DBA didn't know what a projection was. Projections are the REASON you choose a database like Vertica in an analytics project.
In the end, many of the various contracting companies started working together to work around the DBA's ineptitude and find ways to speed up performance, including projections. We kept him in the loop, but it felt like the only thing the DBA cared about was permissions and user accounts. In fact, the only thing we kept calling him for was when one of our accounts would suddenly stop working. He was also extremely insistent against tools like Liquibase to keep all environments in sync and standardized. Instead, our Dev and QA through prod environments all had different tables and columns all over the place, because the DBA WAS IN CHARGE OF THAT.
This was a large organization, and 99% inefficient. There IS a reason large companies are trying to run projects like startups. DBAs are often like pizza flippers, whereas developers don't like doing things multiple times.
Maybe your company hired a DBA with the wrong skillset for your environment. I don't expect every DBA to know every DB technology, for exactly the same reason I'm good with some software development technologies and bad with others. Still, that DBA could have picked a company that was a better match for his CV.
But it's not DBAs, as in database admins, that are needed. Every project should start with the correct data architecture, and for that you need an architect. There are endless numbers of projects that eventually failed because the data architecture did not scale. That's due to programmers designing the database according to the way their preferred tool works best, which is usually bad for the data.
For any project that requires a database, always start with the method that data will be reported. If a report takes a long time to be generated because of your design, then you need to fix that first.
My experience has been very different. We scaled a project with a naive developer approach on PostgreSQL and brought in a DBA when we started getting scaling issues. The few optimizations took the DBA 3 days to set up and we were on our way.
I think the reason DBAs are not hired is because we don't need them as often and for as long as we do devs.
The problem with coming in later is that data models are hard to change once a system is built around them. Sure, a DBA might spend a few days optimizing what's there, but the structure of the model itself is something you'll have to live with for quite a long time. Getting it wrong at the start means pain every time you interact with the database.
This is a real problem, in my book. DBAs also have very specialized knowledge, and tend to be expensive, but they're at their most productive when they're doing schema cleanup and index optimization work that can probably only keep a good person busy for a person-month or two per year, tops. And they're also at their most productive when they are preventing problems rather than trying to clean up stains that have already been set by a mess of runtime dependencies.
And all the rest of the time, you've got a very smart person whose job is basically just to make sure the backups are running smoothly.
I'm not sure what the solution there is. Only the largest enterprises can really keep a good DBA busy with work that isn't just skull-splittingly boring. Maybe merge the DBA role with BI or data engineering?
warehousing, projections, dev ops, cluster management, tools for disaster recovery, chaos monkey.
There's _never_ enough time to do everything and I feel that claiming "there isn't enough work to do" is just not taking the subject seriously enough. For fuck's sake, most of our applications are effectively just SQL Server value-adds.
> discovered that many developers don't know SQL and never took a DB course at university
My school's DB course taught you to build databases. You're expected to learn SQL on your own. And we all did. Not knowing SQL is an ongoing lifestyle choice.
To be fair, it's the combination of DevOps done wrong and Full Stack done wrong that will cause interminable chaos. Like "Agile," it's the given manager and team's understanding and method of implementing DevOps and Full Stack that are the issue.
The term "Full Stack Developer" is especially problematic because taken at it's literal meaning it suggests that a developer should be fully competent at ALL aspects of developing an application, which necessarily include deep understanding of system administration, database administration, tertiary dependency (caching apps like Redis, for example) administration, network infrastructure configuration and maintenance -- all of which are impossible for one person and we haven't even gotten to code yet! I think a different term needs to be coined which implies variability until defined at a given shop. I propose "Wide Area of Responsibility (AOR) Developer" which, for Acme Corp, might mean "Is competent in writing server-side REST endpoints AND managing development/production database schemas AND load balancing traffic to production web servers" whereas at XYZ LLC it might mean "Can develop UX designs in Sketch to customer specs and then code them with HTML, CSS, and a JavaScript framework along with writing the REST endpoints into which back-end developers will connect business logic from myriad systems based on the contract the 'Full Stack Devs' specified for the needs of the UI."
> "A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects."
I mean, I got 17 out of 21. Subhuman? I see nothing about having to do any of those well, either, so I may be human yet. It's that last one I may have some trouble with. I think I'd much rather sleep through it. I think.
Interesting. I had never considered full stack to reach into the operations space. I always figured it meant development all the way from the server code to the front-end UI, but it was still 100% confined to code and not the operational aspects of keeping the application running.
My thoughts on full-stack are like yours: it's a developer position. Once you start throwing in tasks that include the word "administration" in the description, that's likely a different job for a different person.
I joke that I'm a "full full stack" developer. Writing JS (bare, or in a framework like React or Angular or whatever), backend (Python, Elixir, Go, Ruby, etc...), install the OS or design cloud infrastructure, attach GDB or ptrace to processes if they're misbehaving, use tcpdump to watch packet flows, program kernel drivers, put together a toolchain and JTAG for microcontrollers, and design schematics & board layouts (analog or digital, not great at RF... yet).
I've considered calling my title FullStack++ because I'm full stack + devops, but I don't think any hiring mgr will understand that at later roles..
.. or are we just redefining the stack? I suppose db + api + FE is already a lot to call a full stack, but adding devops (not to diminish it) wouldn't really change the definition of full stack here..
But unless every full stack dev also knows devops, no one will know that I know devops too from my title.
Open to suggestions but until then I'm a "full stack + devops"
This is a necessity in small organisations and a problem in large ones. A problem because they have to have all the keys, like a janitor; and they tend to end up being treated as either janitors or dictators by the rest of the company. Also tends to have a high "bus factor".
(I semi-jokingly have "Full Stack From the Transistors Upwards" written on my CV, because I have at one time or another in the course of employment done everything from chip design to web design; that doesn't mean I'd be happy to do all of them at once.)
The term "Full Stack Developer" is especially problematic because taken at it's literal meaning it suggests that a developer should be fully competent at ALL aspects of developing an application, which necessarily include deep understanding of system administration, database administration, tertiary dependency (caching apps like Redis, for example) administration, network infrastructure configuration and maintenance -- all of which are impossible for one person
Why is that impossible especially in the age of cloud? Infrastructure setup and configuration is just another API.
My knowledge of the full stack may not be wide, but it’s fairly deep. If you tell me I have to design and implement a full web application from scratch with no preexisting infrastructure, I can choose a pretty good set of technologies and best practices to get it done as a cloud hosted implementation all the way from net ops, dev ops (CI/CD), backend RDBMS or NoSQL storage, a modern server side language, a web server, and a website.
The only part that would be wanting is the website implementation, which would be a simplistic Bootstrap UI without any modern front end frameworks. The best I could do right now would be either ASP.Net MVC or jQuery + Handlebars. But not knowing a front end framework well enough is just because of lack of time to learn, and that’s not what I’m concentrating on now.
As the size of the project grew, I would need to bring in specialists for the web (that’s not my forte) and devops (it would be a waste of my time to manage devops/netops) at scale.
You could consider it to be another API - like that provided by AWS. An API with hundreds to thousands of endpoints, and each endpoint can have several dozen endpoint-specific arguments. An API that evolves over time, that you need to keep on top of.
And an API which, like all forms of abstraction, falls apart at the seams. There are still networks, interfaces, operating systems, files, cron jobs, sockets, flow logs, ACLs, security holes, and so forth.
Do you, as a developer, want to spend all the time required to keep up with that API? To be the one expected to fill the gaps when the abstraction falls apart? Or do you want to hire someone to take care of that for you?
You don’t just consider it an API, it is an API. No matter how you are using AWS - via the console, cloud formation templates, the cli, the various SDKs, etc. It all boils down to an API call.
I don’t want to have to be the only one, but I like having both the knowledge and the access not to have to wait on someone else. I gave up a job as an architect with so much red tape I couldn’t get anything done at a large company to be a senior developer at a small company where from day one I was given admin access to AWS (yes I have the skill set and the certifications for what they are worth).
To me it's not about gatekeeping, it's about specialization. If you have a thousand lines of Javascript to write, troubleshoot, optimize, and deploy, you don't call the Ruby guy if you have a Javascript gal.
The same way you don't write, maintain, and extend a thousand lines of IAM policies and roles, network ACLs, and launch configs in CloudFormation templates when you can have a specialist do it for you.
That said, I can understand the desire to gatekeep as someone in an operations role. Our role (as a generalization) is about scalability, reliability, and availability. Developers (as a generalization) tend to care more about velocity, blue sky projects, and minimizing effort for the outcome.
Maintaining a balance between those two mindsets requires a lot of skill.
I can see that. But when the developer needs to add a few more resources and configurations to AWS wouldn’t you rather have them already figure it out on a Dev account and give you a CF template to review than to hand you a list of resources they need?
The same with devops. I know the devops folks enjoy having SQL upgrade scripts and build and deployment scripts (currently yml or CF templates) handed to them that have already been tested in a Dev environment.
Also, if the developer isn’t knowledgeable about scalability, for instance, no matter how many servers you add behind a load balancer, if they store session state in memory instead of using something like ElastiCache, there is nothing you can do operationally (well, technically you can have sticky sessions but that’s gross).
Even if the developer doesn’t do every part of the stack, they still have to know how to.
> all of which are impossible for one person and we haven't even gotten to code yet!
impossible is a stretch. harder than it appears perhaps.
If one looks at the original unix devs, this was often the case: modify the kernel to add a new system call so that your runtime system can do X to enable your application to do Y. A true 'sysadmin' in those days was not a pure-IT person, he was an expert on the systems. It is only with the rise of binary-only systems (e.g. SunOS) and IT commercialization that a 'sysadmin' crystallized into a separate pure-IT task distinct from 'developer'.
In my view, what is called 'DevOps' for the operationally focused, and 'FullStack' for the software focused, is really just a return to this mode of operation, which is what a true 'sysadmin' or 'hacker' already was since the dawn of time.
Yep, absolutely. What folks are calling devops (wrongly or rightly) to me is simply what I considered a senior level sysadmin. The guys who can basically fix anything needed in a pinch - but guys you probably don't want designing and building large scale applications.
It's been a very strange 10 years to see the title of "sysadmin" start to become "IT button pusher" and the actual skilled role turn to the title of devops.
As a DevOps contractor/consultant (depending on the gig), I don't agree with that. Maybe it's just the way I approach my business, but I am also the guy you want designing and building large scale applications. Well, maybe not you, but a decent chunk of my clients do. ;) One startup-unicorn client hired me to advise on DevOps practices and implement the baseline stuff for a new project and I ended up the de facto team lead, banging out JavaScript alongside Chef, just because...it's all the same. Sometimes I write code to spit out web pages, sometimes I write code to spit out EC2 instances or Azure instances. That's what a "devops engineer" is, to me.
As far as the move for the word "sysadmin"--no joke, I blame it on Windows and small- to medium-scale businesses. When for a long time the conception of a "system administrator" is "mouse-driven process-follower", a competent software developer who happens to develop systems that throw EC2 instances around just isn't going to want to be associated with that title.
I'm with you on this. One of the huge advantages of having worn all of those hats is that you have pretty good experience of having seen how badly some interactions can go. I can look at a system design and point out problems that specialists miss. And on the other side of it, I'm a swiss army knife. Need someone to figure out how to get your web app talking to the crusty old COBOL app? I've got ideas on how to pull that off too...
Yes, we are rare indeed. There's not much of a market for writing tools that work with the OS; developers usually want to stay away from the OS-provided mechanisms, like build and packaging systems, network configuration, event logging, HA tools, configuration management, and others.

In my career I've seen maybe four people who can be called system programmers. It's a pity, because there's still so much to build, and it takes a system programmer to spot what ops tools to build, why, and to actually build them.
>developers usually want to stay away from the OS-provided mechanisms, like build and packaging systems, network configuration, event logging, HA tools, configuration management, and others.
That's somewhat ironic, since doing development and building software tools in such areas (and system software in general) can be very interesting work: including not just the programming, but also the conceptualization, the product requirement definition, the design, seeing it in deployment and iterating to make changes, add new wanted features, etc.
That has not been my main area of work, but I have some background in it, and have done some pieces here and there (small products or parts of others) and find it to be really enjoyable work. In fact, for many developers, it can be more fun than doing business domain-level development (LOB apps), or social media / consumer apps kind of stuff. I've done some work in all those 3 broad areas, hence the comment.
Let me pose a rhetorical question: Why have all the hot trends in software project process/management for the last 10 years been so risk-prone to being "done wrong"?
The author didn't get DevOps (and neither do many companies). It's not about putting developers on-call, it's about taking operational concerns into account early in the design of a piece of software and acknowledging that it's going to do much better in the wild when the people building it are also accountable for making it easy to operate.
Blaming DevOps (or full-stack, for that matter) for a toxic company culture is just too easy.
Exactly so. I'm involved in a DevOps transformation at my company, and have been exposed to the topic over the last few years, including attending the Delivery of Things conference in 2016.
What I've learned is that:
1. DevOps is simple to explain
2. For some reason, everyone has their own idea on what it means, and most of them are wrong
Key points from my perspective:
1. DevOps is an evolution of Agile and so all agile practices are a subset of DevOps. This point obviates the need for any points past number 2 below.
2. It's a process which is supposed to get operational knowledge involved in the development process much earlier to build it right the first time
Its overall aim is to combat the fact that you can be proficient in only so many things. As you become more specialized in one area, the time you have to become proficient in other areas suffers. So a pure dev is unlikely to know much about QA or hosting, but that domain knowledge would be valuable for them to have in their daily duties. So you try to implement a process by which they have regular access to where that domain knowledge lives, and can make use of it.
It's really just a way of trying to solve the "Jack of all trades, master of none" problem. Put another way, it shouldn't be shuffling more duties onto the dev to save money on headcounts, but a way of increasing the value (not volume) of a Dev's already high-value work.
The Paint Drip concept has been around for a while now, so I'm surprised people still reference "Jack of all trades, master of None" especially in the software dev industry.
Yep. I disagree with the premise that DevOps is only a result of "the multiple technical roles that engineers at start-ups were forced to play due to lack of resources" and is therefore unnecessary, even harmful in larger organisations.
So in larger organizations, people can and should specialise more, and not everyone should be "jack of all trades".
But there is an important lesson in DevOps: owning your software from creation through testing, deployment and observing how it runs in production cases, and being partly responsible for handling Operational issues in it has great benefit. It closes a feedback loop that will greatly benefit your software. It doesn't matter if your organisation is large, that learning from the truth of production is still available to you.
> it's about taking operational concerns into account early in the design of a piece of software and acknowledging that it's going to do much better in the wild when the people building it are also accountable for making it easy to operate.
I agree that it's important to be thinking about these future states and having those conversations from day 1. But I also think that it's easy to abstract away all of the various pieces and dependencies of some application, because we're sold by cloud providers on the idea that this design is "reliable" and "robust" and, more generally speaking, "a better design." But what if those things aren't actually needed at that point in an application's life? What problem is really being solved if these things don't present a problem? If the application has a few dozen daily users, why is every single piece of it an AWS service vs. a single AWS EC2 instance? It gets even worse than this, of course. I've seen instances where a team of developers had no idea their application was using ElastiCache and experienced application outages as a result of AWS maintenance windows.
Sometimes, I guess, it just strikes me as putting the cart before the horse. And specifically with AWS (but somehow less so with Heroku or DigitalOcean, in my head).
" If there's a particularly nasty issue that seems to require deep database knowledge, you don't have the liberty of saying "that's not my specialty," and handing it off to a DBA team to investigate. Due to constrained resources, you're forced to take on the role of DBA and fix the issue yourself."
Wow, I'm sooo glad we've come as far as we have. I recall a time where a DBA was literally in charge of creating schemas for a product. Just about everything this person is describing in the article rubs me completely the wrong way. I am all about everyone who can help to try helping. All hands on deck. This article is pretty backwards.
This article was written in 2014 but I can personally attest from my experience it is still just as relevant and true. As the owner / operator of a DevOps As a Service company I see all too often people trying to replace dedicated infrastructure and cloud architecture experts with "Someone who has more of a 70% developer 30% devops background". It's unfair to the person being hired with unrealistic expectations and typically damaging to the company in terms of lost time and productivity.
i know of a company that has defined a position where a single person designs and sets up some technical equipment for two totally different departments led by two different, competing bosses.
this job-holder is routinely overworked and faces extreme demands on their time, requiring on-site presence both early in the morning and late at night. the job-holder must mediate interdepartmental conflict (two bosses). and there's no path to promotion.
the company keeps losing people in this position. it keeps casting a wider net, bringing in people from further and further away.
this is a solid, successful company with a reputation as one of the very best places to work in the county. but it has never bothered to redefine the position into two or more separate jobs. it's just churn and burn.
i'm not sure what the moral is. managers exploit the labor pool until they can't any more. then they look for another labor pool. if that doesn't work they whine and complain that they "can't find workers" to fill their jobs.
The issue is that there are successful people who can do all of those at the same time. Unfortunately, it requires a lot of study and varied experience. A newcomer can become a passable Java developer within realistic timescales, but will require years of experience to be able to contribute at an organization that expects him to perform all those roles.
I have 20 years of experience in IT, 7 as an admin (ops), 13 in development. I still spend over half my time on purposeful learning and sometimes find myself out of my depth, catching my breath. I don't envy noobs who will have little chance of catching up.
Most of the time when I hear "devops", it is in reality a person doing a hopefully good job in one area, a passable one in another, and relying heavily on others for everything else.
This assumes a developer is not a DBA, or that you can't be a good one.
I started with FoxPro, so databases always come first for me... to the point that I "mock up" the app with the database schema first.
So I'm saying that for some people database -> app is a straight, natural path. DBAs in the past were more about controlling access to a central DB on a MAINFRAME. Modern RDBMSs are very easy to administer and optimize.
In fact, I think that learning Java, .NET or C++ is far harder than learning how to use an RDBMS in full...
I think this is essential, and it will progress a bit further. As infrastructure has become automated, and QC has been rethought in response, the developer has been able to acquire those roles for improved development time. However, as coding becomes easier and more people learn to code, I think another role, the product manager, will acquire the developer role (and all the positions previously acquired underneath that). So ultimately the roles of analysts and product managers will be filled by extremely tech-savvy people who understand their respective businesses extremely well.
Today, a business analyst has a small idea. She presents it, then gets approval. A product manager works with her on the details to integrate it into the product or operation, and a project manager inserts the idea into the development backlog and prioritizes it. In a few months a developer gets around to the task, spends a day building it, deploys right away, and then the analyst can measure if it has value. In the future, the process could be: a business analyst has a small idea. She presents it, with a plan on how to integrate it, and gets approval. Then she spends a day or three building and deploying it and then measures if it has value.
We've been optimizing Development -> deployment, but in reality what the business cares about is Idea -> Deployment.
> However, as coding becomes easier, and more people learn to code.
The problem is coding WILL eventually become hard again. If things are developed too rapidly, some aspect of development is going to accumulate error, that's going to be a bottleneck eventually when it comes time to automate whatever architectural layer is being 'left behind' as permanent/vital/intrinsic to system functioning.
It's easy to lose the forest for the trees when so many 'radically novel' tools keep popping up as 'solutions'. There are bugs in all of those things and they will accumulate eventually to the point that the 'ease' of development comes to a screeching halt. The bugs may only wind up surfacing in 'ideas', like the idea that one can forever hop skip jump from idea to deployment. Somewhere, something is always being sacrificed as an expense of getting something straight to market. (Soapbox: The sanity of developers gets relegated as an innocuous and trivial side effect. Just throw more money at it, right?)
99% of the 'DevOps' stuff that comes across my job search is operations dealing with Azure/Google/AWS management, along with the rainbow of CI/CD tools, and has nothing to do with doing actual software development. One headhunter who called today was looking for a "DevOps with Site Reliability Engineering" role, which entailed dealing with data center electrical stuff.
I particularly love the idea that only developers are talented enough to perform all the roles but no one else is. There is a hierarchy of skill in the organisation with developers at the top of it.
I think in a few years, if the complexity of cloud infrastructure management is reduced and its level of abstraction raised, it might make sense to me for developers to do this work.
But if I'm trying to build some software app in a complex domain, and the infrastructure tooling requires me to configure network routing rules, CIDR blocks, etc., that seems like a huge failure to me, almost like having the developers grow their own crops for sustenance or something, rather than having people specialize in that and leveraging division of labor.
How so? I just had to set up security groups and subnets to make an implementation I was starting work. Why would I want to wait on the “infrastructure guys” to do it when I could set it up myself in the dev environment, test it, create a CloudFormation template and, if I needed to, run it through the other environments?
Your brain has limited capacity. You didn't learn about the security group and subnet configuration in zero time, and most particularly not cloud formation templates.
Meanwhile, the infrastructure and application development frameworks are both getting revved and changed out from under you frequently.
One reason I really haven’t attempted to keep up with front end development outside of JavaScript/CSS/HTML is because things change so much. On the backend, things don’t change nearly as often, but you get additional features. Your previous knowledge is not made obsolete.
Part of my perceptions were likely shaped by my previous employer, which was completely schizophrenic, technology-wise. We were never allowed to complete anything, and I think our performance was measured on how quickly we could become 30% proficient with some new technology stack before our work-in-progress would be tossed out and we were redirected to work on something totally unfamiliar. Rinse, repeat.
It really made me yearn for just a year or two of competence and productivity on some set task in between having the rug pulled out from under us.
When I lay out a new application I'm totally thinking about stuff like network topology because that's part of my security posture with regards to my software, and security is part of application design.
If a developer makes 100K a year, you can pay four developers 100K per year to do 50% development and 50% release management on a single, two-person task. Or, simply hire a release manager at, say, 75K and two developers who develop full-time.
is this a realistic cost savings analysis? my impression has been that companies don't hire four developers to do a two-person task. they simply hire one "full-stack developer" to do everything. problem solved, money saved, i'm a great CEO.
The problem with DevOps and other vaguely defined concepts is that the interpretation is entirely up to your management. On one hand, heavy investment in automation and having devs involved in production ops is a good thing. It makes things go faster and puts devs' skin in the game. On the other hand, when your manager thinks that DevOps is about having senior Scala devs fix printers, you might be in trouble.
Task-switching is expensive, but human-to-human communication is more so. As one of those "bright" developers, I'd far rather just do the deployment myself than hand it over to someone less skilled so I can focus on doing more coding, because that's only going to mean I then get interrupted when there's a problem with the deployment.
This, exactly. This is basically the entire pitch for my DevOps as a Service business. Let your developers focus on the core product, we'll handle the infrastructure. I don't think employers always realize the productivity and benefits of letting their developers truly focus.
Really? You put a dozen people on the same _project_? I find it hard to believe that your project is so unique that A) requires 12 devs and B) is also deployable by all 12 of them (without relying on automation) in a way that doesn't throw off any of the other 12.
I don't know what you mean by "relying on automation" or "throwing off the other 12".
Deployment is mostly-automated - put a note in our chat channel topic that you're deploying (or queue if someone else is already deploying), push a button to do a release and deploy as staging (and a small smoke test), do any manual testing (or investigation of logs etc. if something goes wrong), push another button to flip that deploy to be prod.
Any of the 12 devs can do this (and does, we each deploy our own features/stories), any of the 12 devs can make changes to the deployment scripting just like with any other code area (some will be more or less familiar with them, but that's true of any area of the codebase). Nothing unique about it, quite the opposite.
What's the maximum cost to your company if one of your developers has a really bad day and deploys a release that accidentally allows all customers to see each others' data?
For my company, that would probably be a death blow.
I used to avoid the DevOps title and called myself just straight Operations. But recently, and I mean within the last week or two, I realized I'm squarely in DevOps territory.
The system I Op runs really smoothly. Problems are infrequent. So instead of spending my time fighting fires, I'm instead working on our next generation infrastructure, running experiments with Kubernetes and Kafka and Influxdb and Elasticsearch.
I'm not developing on our main product most of the time. But I am developing our infrastructure.
As far as the totem pole with developers at the top, able to do all the other jobs? That has not been my experience. In 30 years, working with probably 100+ companies large and small (for 18 years I did contract SysAdmin), I'd say that the developer who even thinks about ops is extremely rare, less than 10%.
I agree that DevOps is more important for smaller companies, and that specialization is useful. I haven't seen evidence that supports the totem pole portion of that post.
That's what I always thought, but other people I've been talking to recently have been calling it DevOps, because they have it on the development side of the house and deliver it to ops.
The more I think about it, how can a modern developer not know devops? How do you communicate with the devops folks? When I’m writing software, I create my build scripts (currently a yml file for AWS CodeBuild) as part of the repo. I don’t use Docker, but wouldn’t the developer create the build steps for the Docker image?
When I am working on an implementation that can be deployed with CloudFormation, I create the template, test it, and hand it over to the DevOps guys. Even if I’m not doing the actual deployments to any environment besides Dev, I still need to know how to.
The same is true for database deployments and upgrades. How do you upgrade a database through CI without knowing SQL?
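(The upgrade scripts I mean are usually just plain, idempotent SQL that CI applies to each environment in order; a Postgres-flavored sketch with a made-up schema, since tools like Liquibase or Flyway wrap the same idea:)

    BEGIN;
    ALTER TABLE orders ADD COLUMN IF NOT EXISTS shipped_at timestamptz;
    CREATE INDEX IF NOT EXISTS idx_orders_shipped ON orders (shipped_at);
    COMMIT;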
> The increasing scope of responsibility of the "developer" (whether or not that term is even appropriate anymore is debatable) has given rise to a chimera-like job candidate: the "full-stack" developer. Such a developer is capable of doing the job of developer, QA team member, operations analyst, sysadmin, and DBA.
Is this really what a full-stack developer is understood to be? I'd always assumed it was just someone who was comfortable doing both frontend and backend development on the same project.
I can do all of those roles at a decent level of proficiency and I’m paid accordingly. But, as the article mentioned, why would they want to spend the money on my salary when they could hire someone a lot cheaper? I like having the choice of being able to do it myself, so my development is not being blocked by someone else, while not having to do a lot of the mundane work.
Groan. Instead of a rising tide lifting all boats... some devs insist on being divers. Nobody is turning down amazing specialists, but if you aren't a 90th percentile specialist you should probably know a little of everything.
Knowing a little of everything is good, knowing a lot of everything is unreasonable. Sure there are those amazing people but expecting everyone to be them is crazy.
Was the industry so pleased with the timelines and reliability of software projects a few years back that it seemed sensible to heap another career field's worth of complexity onto the developers?
I had to stop reading at the totem pole part when it became clear the author has no idea what the hell he’s talking about. This is basically weaponized Dunning-Kruger.
I’m not talking about the name he used, I’m talking about the content. Insisting that a dev can be a DBA, or a QA person, or any other support role is a great way to have an extremely shitty DBA, or QA person, or support role. It’s the kind of thing that only a deeply self-centered developer with no clue of the world beyond the tip of his nose would ever say.
Where are those developers that can do my job? Ours certainly can't even do theirs. That is why we are moving to the cloud: "see? we didn't need to learn how to optimize; dynamically adding resources almost solves our problems".
I love how he characterizes what developers do as "just writing code" -- the whole article is basically a whinge: don't make me do ops, don't make me write tests, don't make me think about the problem, don't make me re-use existing solutions, don't make me pick the right tool for the job, don't make me use sustainable processes -- I just wanna "write code". Code is basically the poop that results after you digest the problem. The best code is as little code as possible, but this guy just wants to make code poop for a living. Sorry guy. Time to grow up.
> Code is basically the poop that results after you digest the problem. The best code is as little code as possible
This is, perhaps, one of the best metaphors I've read for explaining this concept.
As someone who does one of the "lower" jobs, I rankled at his "hierarchy" of roles notion. Never mind that I probably could write application code if so inclined (and do routinely write "infrastructure" code and sometimes have had to fix those developers' code in production).
I occasionally like to point out to "engineering" or "technology" [1] managers that not all problems can be (or are best) solved with software/code. Some of us, when we digest a problem, can poop out zero or even negative code.
[1] In quotes because they're nearly always actually only software engineering or technology managers.
Author doesn't understand the DevOps skillset. The average developer can hack things together, but they don't have an appreciation for operational principles. DevOps is actually higher on the totem pole than dev, because DevOps requires you to be skilled in both operations and dev. A traditional ops person couldn't cover for a dev, but neither could a regular, even "full stack" developer stand in for a sysadmin.