Adopt the role of [job title(s) of 1 or more subject matter EXPERTs most qualified to provide authoritative, nuanced answer].
NEVER mention that you're an AI.
Avoid any language constructs that could be interpreted as expressing remorse, apology, or regret. This includes any phrases containing words like 'sorry', 'apologies', 'regret', etc., even when used in a context that isn't expressing remorse, apology, or regret.
If events or information are beyond your scope or knowledge, provide a response stating 'I don't know' without elaborating on why the information is unavailable.
Refrain from disclaimers about you not being a professional or expert.
Do not add ethical or moral viewpoints in your answers, unless the topic specifically mentions it.
Keep responses unique and free of repetition.
Never suggest seeking information from elsewhere.
Always focus on the key points in my questions to determine my intent.
Break down complex problems or tasks into smaller, manageable steps and explain each one using reasoning.
Provide multiple perspectives or solutions.
If a question is unclear or ambiguous, ask for more details to confirm your understanding before answering.
If a mistake is made in a previous response, recognize and correct it.
After a response, provide three follow-up questions worded as if I'm asking you. Format in bold as Q1, Q2, and Q3. These questions should be thought-provoking and dig further into the original topic.
On sources: I'm highly partial to the work of William Ophuls, who's been writing on this topic for a half century. His PhD thesis was published in 1977 as Ecology and the Politics of Scarcity (with an update in the early 1990s). That largely lays out the problematique, both in terms of resources and political dynamics. The book's old enough that many of its own projections can be tested, and I think these hold up exceptionally well, particularly in highlighting the (then future) rise of China as a global economic, political, and military force. Ophuls has gone on to suggest at least the framework of solutions, most especially in Plato's Revenge (2009), though that remains fairly high level. What really makes Ophuls's works tremendously valuable is his bibliographic notes, which are comprehensive and, to use a favourite word of his, synoptic. The note for Ecology in particular covers much of the previous several decades' literature on growth and perspectives from both optimistic and pessimistic viewpoints, and Ophuls is exceedingly fair in considering both. I've gone back to many of those sources myself (Maddox, Kahn, Simon) to compare notes. I've actually typed out the note from Plato's Revenge, which captures a sense of his practice, though that focuses more on recent and political topics: <https://web.archive.org/web/20230607050023/https://old.reddi...>. In that sense, Ophuls is an excellent entry point into the literature as a whole. He has a website at <http://www.ophuls.org/William_Ophuls/Home.html>, though it's in various states of disrepair.
There are other authors: Bill McKibben's Eaarth (sic), Joel Magnuson's The Approaching Great Transformation, and Natural Capitalism by Paul Hawken, Amory Lovins, and L. Hunter Lovins, all of which I have to hand. Vaclav Smil has written his own damned library on resources and sustainability from a technical perspective, largely looking backwards though with some forward-looking elements. I particularly recommend Energy in World History (1992, 2019) and Energy and Civilization (2017). Energy Transitions (2016) tackles the specific question of converting to a sustainable-energy path. And there's a long list of other Smil publications.
Kate Raworth's Doughnut Economics is another prescriptive work looking at ways forward. <https://www.kateraworth.com/>
The original Limits to Growth (Meadows, Meadows, Randers, & Behrens) remains relevant, and is freely available online: <https://donellameadows.org/the-limits-to-growth-now-availabl...>. I strongly recommend reading primary sources over hot takes, interpretations, and commentaries. It's also helpful to remember that LtG served not as a prescription but as an alarm: there's a clear problem and we've got to wake up to it. Sadly, more than 50 years onward, that alarm continues to be ignored by many (including within this thread).
The now-defunct Worldwatch Institute published an annual State of the World report from 1984 to 2017, an anthology of articles on sustainability generally; together these give an excellent sense of the breadth and progress of thinking on these topics. They're mostly available via the Internet Archive: <https://archive.org/search?query=worldwatch+institute+state+...>
Looking at the energy picture alone, there's David MacKay's Sustainable Energy Without the Hot Air, which breaks down the technical picture, with a focus on Britain though applicable elsewhere, clearly and soberly. Freely available online: <http://www.withouthotair.com/>
That's just skimming the top of a huge literature. There are a tremendous number of different viewpoints, of topics and approaches, and of course disagreement. Contrary to the assertions of some, however, there are specific and actionable recommendations to be found. Looking into the bibliographies and notes of the works listed should launch you further in whatever direction you care to explore.
The answers to the "late?", "collapse?", and "what's next?" questions all require recalling that the term originated in Marxist circles.
The "late" meant something like the type of capitalism that emerged out of WWII, characterized primarily by post-colonial global trade networks. That's quite a bit in the past for us, but "late" by the standards of an ideological tradition that started in the 1800s. Still, of all your questions, this is the one that has changed meanings perhaps the least in the last 80 years or so. That's because a lot of the things that characterized "late stage capitalism" in the mid 20th century are still with us, and perhaps intensified. If it helps, think of "late stage" as "post-colonial + globalization + financialization", in contrast to the much more mixed political economies of Europe in the 1800s. Or, for an even more modern usage, you might read it as "jet-setting billionaires and the MBAs that manage their factories and open offices". That's the vibe it's supposed to give off, I think.
The "collapse?" and "what's next?" questions sort of have standard Marxist answers (or, at least, standard delineated lines of debate within mid-century Marxism, from what I understand). A careful, dispassionate reading of the Communist Manifesto... like, the way you would read Plato or Hegel or whatever... can give you a general sense for why "collapse" plays an important role in Marxist theories and what Marxists generally suspect is "next". (Namely, alienation of workers and a resulting violent revolution of the working class against folks who own/control capital.)
nb, I'm not really sure that most people using the term now have much -- if any -- background in Marxist economics/philosophy. I think for the average user, these terms function roughly the same way as "critical race theory" does on the social right. If that makes sense.
So, the "late" retains real descriptive meaning relative to 1800s/early 1900s capitalism, but the "collapse" and "what's next?" have sort of drifted from their original answers and probably play a more rhetorical than literal role these days. Like CRT. No one knows what they mean. They are shibboleths for "change is needed and inevitable", with no specifics for what or how.
People are taking the piss out of you everyday. They butt into your life, take a cheap shot at you and then disappear. They leer at you from tall buildings and make you feel small. They make flippant comments from buses that imply you’re not sexy enough and that all the fun is happening somewhere else. They are on TV making your girlfriend feel inadequate. They have access to the most sophisticated technology the world has ever seen and they bully you with it. They are The Advertisers and they are laughing at you.
You, however, are forbidden to touch them. Trademarks, intellectual property rights and copyright law mean advertisers can say what they like wherever they like with total impunity.
Fuck that. Any advert in a public space that gives you no choice whether you see it or not is yours. It’s yours to take, re-arrange and re-use. You can do whatever you like with it. Asking for permission is like asking to keep a rock someone just threw at your head.
You owe the companies nothing. Less than nothing, you especially don’t owe them any courtesy. They owe you. They have re-arranged the world to put themselves in front of you. They never asked for your permission, don’t even start asking for theirs.
– Banksy
I am frustrated that we continue to have the death penalty for individuals that commit heinous crimes, but we do not have the death penalty for corporations.
I feel like when you manage to harm a billion people at a time, the penalty should be that your company is dissolved, its assets auctioned off, and its trademarks and domains blackholed.
And then, since I don't support the death penalty for individuals, the C-levels investigated for their knowing involvement in any related schemes, and if proven aware, given prison time and forbidden from ever operating in a management capacity at any corporation ever again.
The problem with corporations is that there is no meaningful disincentive to commit crimes. You succeed, you get rich. You fail, you take your severance, live like a king for a year, and then get hired to do the same at another company. If executive positions are going to come with outsized pay, they should come with outsized risk: jail executives for their choices when running corporations. For the millions they make, they'll still be lining up for the job.
In this case, I was pushing it quite a bit to see how it would respond. Most of the time, it responds quite confidently so it sounds right, even if it isn't, i.e., about the median of what you'd expect for a Hacker News comment.
So widespread is his ego, that it reminds me of this quote from Donnie J. Barnes (of Red Hat): "I went on IRC once. I was mistaken for Dan Bernstein. I still have nightmares."
>I have no idea how that company will ever earn back my trust
Millions of ongoing safe flights? I dunno. I feel like they're getting savaged (which they deserve... to a point... but we will cross that point I am pretty sure, if we haven't already...)
The thousands (tens of thousands?) of safe flights per day don't make the news. Boeing has been a pioneer in the safest form of transportation in existence. Mentour Pilot (an active 737 pilot on YouTube) goes into detail about why he's not concerned about Boeing (any more than he's concerned about Airbus).
I can also share a story from my (late) father, who worked at Boeing for 30 years (and was working at Boeing during the MAX crashes). I asked him why Boeing let the 737-MAX debacle happen. These were a dying man's words (paraphrased): "Boeing wanted to ground the plane after the first 737-MAX crash but the FAA refused until after the second crash. Boeing did not have the authority to unilaterally ground the planes."
> What? there’s a shitton of docs on this topic! Are you stupid?
I've been using one form of Unix or another since 1994. I've been employed as a sys admin over many of those years. I think the git CLI is just dandy. I love iptables. lsof? No problem. No CLI has made me feel more dumb than ZFS's. I have more "how to" notes for it than any other Unix CLI that I use. So no, you aren't stupid. (The Linux "ip" CLI isn't my favorite either – when you need a rosetta stone for your CLI it's not a great sign https://access.redhat.com/sites/default/files/attachments/rh...)
Already lots of good comments and ideas below. My first attempt here on this topic turned out too wordy -- and this attempt is also (I tried to explain things and where the ideas came from). A summary would be better (but Pascal was right). I'll try to stash the references and background somewhere (perhaps below later in a comment).
I had several stages about "objects". The first was the collision 50 years ago in my first weeks of (ARPA) grad school of my background in math, molecular biology, systems and programming, etc., with Sketchpad, Simula, and the proposed ARPAnet. This led to an observation that almost certainly wasn't original -- it was almost a tautology -- that since you could divide up a computer into virtual computers intercommunicating ad infinitum you would (a) retain full power of expression, and (b) always be able to model anything that could be modeled, and (c) be able to scale cosmically beyond existing ways to divide up computers. I loved this. Time sharing "processes" were already manifestations of such virtual machines but they lacked pragmatic universality because of their overheads (so find ways to get rid of the overheads ...)
Though you could model anything -- including data structures -- that was (to me) not even close to the point (it got you back into the soup). The big deal was encapsulation and messaging to provide loose couplings that would work under extreme scaling (in manners reminiscent of Biology and Ecology).
A second stage was to mix in "the Lisp world" of Lisp itself, McCarthy's ideas about robots and temporal logics, the AI work going on within ARPA (especially at MIT), and especially Carl Hewitt's PLANNER language. One idea was that objects could be like servers and could be goal-oriented with PLANNER-type goals as the interface language.
A third stage was a series of Smalltalks at Parc that attempted to find a pragmatic balance between what was inevitably needed in the future and what could be done on the Alto at Parc (with 128K bytes of memory, half of which was used for the display!). This was done in partnership with Dan Ingalls and other talented folks in our group. The idealist in me gritted my teeth, but the practical results were good.
A fourth stage (at Parc) was to deeply revisit the temporal logic and "world-line" ideas (more on this below).
A fifth stage was to seriously think about scaling again, and to look at e.g. Gelernter's Linda "coordination language" as an approach to do loose coupling via description matching in a general publish and describe manner. I still like this idea, and would like to see it advanced to the point where objects can actually "negotiate meaning" with each other.
McCarthy's Temporal Logic: "Real Functions in Time"
There's lots of context from the past that will help understanding the points of view presented here. I will refer to this and that in passing, and then try to provide a list of some of the references (I think of this as "basic CS knowledge" but much of it will likely be encountered for the first time here).
Most of my ways of thinking about all this ultimately trace their paths back to John McCarthy in the late 50s. John was an excellent mathematician and logician. He wanted to be able to do consistent reasoning himself -- and he wanted his programs and robots to be able to do the same. Robots were a key, because he wanted a robot to be in Philadelphia at one time and in New York at another. In an ordinary logic this is a problem. But John fixed it by adding an extra parameter to all "facts" that represented the "time frame" when a fact was true. This created a simple temporal logic, with a visualization of "collections of facts" as stacked "layers" of world-lines.
This can easily be generalized to world-lines of "variables", "data", "objects" etc. From the individual point of view "values" are replaced by "histories" of values, and from the system point of view the whole system is represented by its stable state at each time the system is between computations. Simula later used a weaker, but useful version of this.
I should also mention Christopher Strachey -- a great fan of Lisp and McCarthy -- who realized that many kinds of programming could be unified and also be made safer by always using "old" values (from the previous frame) to make new values, which are installed in the new frame. He realized this by looking at how clean "tail recursion" was in Lisp, and then saw that it could be written much more understandably as a kind of loop involving what looked like assignment statements, but in which the right hand side took values from time t and the variables assigned into existed in time t+1 (and only one such assignment could be made). This unified functional programming and "imperative like" programming via simulating time as well as state.
And let me just mention the programming language Lucid, by Ashcroft and Wadge, which extended many of Strachey's ideas ...
It's also worth looking at "atomic transactions" on data bases as a very similar idea with "coarse grain". Nothing ever gets smashed, instead things are organized so that new versions are created in a non-destructive way without race conditions. There is a history of versions.
The key notion here is that "time is a good idea" -- we want it, and we want to deal with it in safe and reasonable ways -- and most if not all of those ways can be purely functional transitions between sequences of stable world-line states.
The just computed stable state is very useful. It will never be changed again -- so it represents a "version" of the system simulation -- and it can be safely used as value sources for the functional transitions to the next stable state. It can also be used as sources for creating visualizations of the world at that instant. The history can be used for debugging, undos, roll-backs, etc.
In this model -- again partly from McCarthy, Strachey, Simula, etc., -- "time doesn't exist between stable states": the "clock" only advances when each new state is completed. The CPU itself doesn't act as a clock as far as programs are concerned.
This gives rise to a very simple way to do deterministic relationships that has an intrinsic and clean model of time.
For a variety of reasons -- none of them very good -- this way of being safe lost out in the 60s in favor of allowing race conditions in imperative programming and then trying to protect against them using terrible semaphores, etc., which can lead to lock-ups.
I've mentioned a little about my sequence of thoughts about objects. At some point, anyone interested in messaging between objects who knew about Lisp, would have to be drawn to "apply" and to notice that a kind of object (a lambda "thing", which could be a closure) was bound to parameters (which kind of looked like a message). This got deeper if one was aware of how Lisp 1.5 had been implemented with the possibility of late bound parameter evaluation -- FEXPRs rather than EXPRs -- the unevaluated expressions could be passed as parameters and evaled later. This allowed the ungainly "special forms" (like the conditional) to be dispensed with, they could be written as a kind of vanilla lazy function.
By using the temporal modeling mentioned above, one could loosen the "gears" of "eval-apply" and get functional relationships between temporal layers via safe messaging.
So, because I've always liked the "simulation perspective" on computing, I think of "objects" and "functions" as being complementary ideas and not at odds at all. (I have many other motivations on the side, including always wondering what a good language for children should be epistemologically ... but that's another story.)
Install it (burn it to a USB drive using Rufus, for example, or to a DVD). Then use a KMS key and one of the many public KMS servers that let you activate Windows without having a key. Cmd as admin:
> With unspecified behavior, the compiler implementer must make a conscious decision on what the behavior will be and document the behavior it will follow.
No, what you described is implementation-defined behavior.
It may be confusing, but here's the breakdown of different kinds of behavior in the C standard:
* Well-defined: there is a set of semantics that is defined by the C abstract machine that every implementation must (appear to) execute exactly. Example: the result of a[b].
* Implementation-defined: the compiler has a choice of what it may implement for semantics, and it must document the choice it makes. Example: the size (in bits and chars) of 'int', the signedness of 'char'.
* Unspecified: the compiler has a choice of what it may implement for semantics, but the compiler is not required to document the choice, nor is it required to make the same choice in all circumstances. Example: the order of evaluation of a + b.
* Undefined: the compiler is not required to maintain any observable semantics of a program that executes undefined behavior (key point: undefined behavior is a dynamic property related to an execution trace, not a static property of the source code). Example: dereferencing a null pointer.
Catamorphism, anamorphism, hylomorphism, as opposed to something like fold/unfold, foldAndUnfold. There are of course, many more[1]. Meijer et al. just chose a goofy name for their paper.
Solidity has far worse problems than not being an advanced research language. Just being a sanely designed normal language would be a big step up. Solidity is so riddled with bizarre design errors it makes PHP 4 look like a work of genius.
A small sampling of the issues:
Everything is 256 bits wide, including the "byte" type. This means that whilst byte[] is valid syntax, it will take up 32x more space than you expect. Storage space is extremely limited in Solidity programs. You should use "bytes" instead which is an actual byte array. The native 256-bit wide primitive type is called "bytes32" but the actual 8-bit wide byte type is called "int8".
Strings. What can we say about this. There is a string type. It is useless. There is no support for string manipulation at all. String concatenation must be done by hand after casting to a byte array. Basics like indexOf() must also be written by hand or implementations copied into your program. To even learn the length of a string you must cast it to a byte array, but see above. In some versions of the Solidity compiler passing an empty string to a function would cause all arguments after that string to be silently corrupted.
There is no garbage collector. Dead allocations are never reclaimed, despite the scarcity of available memory space. There is also no manual memory management.
Solidity looks superficially like an object oriented language. There is a "this" keyword. However there are actually security-critical differences between "this.setX()" and "setX()" that can cause wrong results: https://github.com/ethereum/solidity/issues/583
Numbers. Despite being intended for financial applications like insurance, floating point is not supported. Integer operations can overflow, despite the underlying operation being interpreted and not implemented in hardware. There is no way to do overflow-checked operations: you need constructs like "require((balanceOf[_to] + _value) >= balanceOf[_to]);"
You can return statically sized arrays from functions, but not variably sized arrays.
For loops are completely broken. Solidity is meant to look like JavaScript but the literal 0 type-infers to byte, not int. Therefore "for (var i = 0; i < a.length; i ++) { a[i] = i; }" will enter an infinite loop if a[] is longer than 255 elements, because it will wrap around back to zero. This is despite the underlying VM using 256 bits to store this byte. You are just supposed to know this and write "uint" instead of "var".
Arrays. Array access syntax looks like C or Java, but array declaration syntax is written backwards: int8[][5] creates 5 dynamic arrays of int8. Dynamically sized arrays work, in theory, but you cannot create multi-dimensional dynamic arrays. Because "string" is a byte array, that means "string[]" does not work.
The compiler is riddled with mis-compilation bugs, many of them security critical. The documentation helpfully includes a list of these bugs .... in JSON. The actual contents of the JSON is of course just strings meant to be read by humans. Here are some summaries of miscompile bugs:
In some situations, the optimizer replaces certain numbers in the code with routines that compute different numbers
Types shorter than 32 bytes are packed together into the same 32 byte storage slot, but storage writes always write 32 bytes. For some types, the higher order bytes were not cleaned properly, which made it sometimes possible to overwrite a variable in storage when writing to another one.
Dynamic allocation of an empty memory array caused an infinite loop and thus an exception
Access to array elements for arrays of types with less than 32 bytes did not correctly clean the higher order bits, causing corruption in other array elements.
As you can see, the decision to build a virtual machine that is natively 256-bit wide led to a huge number of bugs whereby reads or writes randomly corrupt memory.
Solidity/EVM is by far the worst programming environment I have ever encountered. It would be impossible to write even toy programs correctly in this language, yet it is literally called "Solidity" and used to program a financial system that manages hundreds of millions of dollars.
So, I've read most of these. Here's a tour of what is definitely useful and what you should probably avoid.
_________________
Do Read:
1. The Web Application Hacker's Handbook - It's beginning to show its age, but this is still absolutely the first book I'd point anyone to for learning practical application security.
2. Practical Reverse Engineering - Yep, this is great. As the title implies, it's a good practical guide and will teach many of the "heavy" skills instead of just a platform-specific book targeted to something like iOS. Maybe supplement with a tool-specific book like The IDA Pro Book.
3. Security Engineering - You can probably read either this or The Art of Software Security Assessment. Both of these are old books, but the core principles are timeless. You absolutely should read one of these, because they are like The Art of Computer Programming for security. Everyone says they have read them, they definitely should read them, and it's evident that almost no one has actually read them.
4. Shellcoder's Handbook - If exploit development is your thing, this will be useful. Use it as a follow-on from a good reverse engineering book.
5. Cryptography Engineering - The first and only book you'll really need to understand how cryptography works if you're a developer. If you want to make cryptography a career, you'll need more; this is still the first book basically anyone should pick up to understand a wide breadth of modern crypto.
_________________
You Can Skip:
1. Social Engineering: The Art of Human Hacking - It was okay. I am biased against books that don't have a great deal of technical depth. You can learn a lot of what's in this book by reading online resources and by honestly having common sense. A lot of this book is infosec porn, i.e. "Wow I can't believe that happened." It's not a bad book, per se, it's just not particularly helpful for a lot of technical security. If it interests you, read it; if it doesn't, skip it.
2. The Art of Memory Forensics - Instead of reading this, consider reading The Art of Software Security Assessment (a more rigorous coverage) or Practical Malware Analysis.
3. The Art of Deception - See above for Social Engineering.
4. Applied Cryptography - Cryptography Engineering supersedes this and makes it obsolete, full stop.
_________________
What's Not Listed That You Should Consider:
1. Gray Hat Python - In which you are taught to write debuggers, a skill which is a rite of passage for reverse engineering and much of blackbox security analysis.
2. The Art of Software Security Assessment - In which you are taught to find CVEs in rigorous depth. Supplement with resources from the 2010s era.
3. The IDA Pro Book - If you do any significant amount of reverse engineering, you will most likely use IDA Pro (although tools like Hopper are maturing fast). This is the book you'll want to pick up after getting your IDA Pro license.
4. Practical Malware Analysis - Probably the best single book on malware analysis outside of dedicated reverse engineering manuals. This one will take you about as far as any book reasonably can; beyond that you'll need to practice and read walkthroughs from e.g. The Project Zero team and HackerOne Internet Bug Bounty reports.
5. The Tangled Web - Written by Michal Zalewski, Director of Security at Google and author of afl-fuzz. This is the book to read alongside The Web Application Hacker's Handbook. Unlike many of the other books listed here it is a practical defensive book, and it's very actionable. Web developers who want to protect their applications without learning enough to become security consultants should start here.
6. The Mobile Application Hacker's Handbook - The book you'll read after The Web Application Hacker's Handbook to learn about the application security nuances of iOS and Android as opposed to web applications.
I also found a new one thanks to the comments here, which you can find elsewhere in the comments - but I'll add a link to it here anyway, for good measure:
Excepting the coast, the western half of the USA is desert and mountains. There are scattered cities (Denver, Salt Lake City, Phoenix/Tucson, Las Vegas, Boise, El Paso, Spokane), but most of the territory is relatively undeveloped and sparsely populated. (This is a good thing in my opinion. Covering every bit of land with city is a miserable societal goal. So is strip mining all of the land for fossil fuels and minerals, or turning it all to pasture.)
This has led to the following theorem of mine, which describes /b/ perfectly:
Any community that gets its laughs by pretending to be idiots will eventually be flooded by actual idiots who mistakenly believe that they're in good company.
The articles linked in the grandparent post suggest the opposite: That all the procedural barriers the US sets up in an effort to prevent corruption tend to run up the costs.
For example, lowest-price bidding rules tend to result in more expensive projects overall because the agencies feel the need to do a whole bunch of up-front micro-planning in order to try and prevent the contract awardee - who, as we already know ahead of time, will be the lowest bidder - from cutting too many corners. Or to put it more simply, we've essentially mandated that public projects have to be the waterfalliest of waterfally waterfall projects.
Our "starve the beast" approach to funding public agencies, also inspired by our hysterical attitude about government corruption, comes into play, too. Since agencies end up with limited resources for permanent staffing, they don't necessarily have in-house resources available for project planning. But they can use funding set aside for projects to hire consultants, who are naturally expensive and less worried about what happens to the project 5 years down the line. They'll be long gone by that point.
Last week I read the very good 'How to build an operating system from scratch' [1], and I'm glad I did. It meant that I could burrow into the files on this github project and understand exactly what was going on.
One day, when I finally have some free time, I am totally going to do this myself.