Would it be better if GPU drivers could compile open-spec bytecode and upload the result to the GPU to do all of the computation? This way OpenGL may be used as a library, shipped with the application.
Sadly, the translation from SPIR-V to system-dependent machine code can still be buggy. Hopefully, though, most of the optimisation will take place at the SPIR-V level, which, as I understand it, is pretty similar to LLVM IR. That should enable reuse of thoroughly debugged code instead of each vendor maintaining their own full compiler.
> Sadly, the translation from SPIR-V to system-dependent machine code can still be buggy.
This is true, but it eliminates at least one of the points where things can go wrong. Additionally, this approach has the advantage that developers (with some practice) can read the SPIR-V "assembly" code to make sure it is correct. With existing solutions it was already hard to get at and interpret the intermediate code to find out whether the problem was in the frontend or the backend.
I just came here to write that I think this is THE right approach for a WYSIWYG editor: having a full model of the document tree and working from that, rather than sanitizing `contenteditable`. I expect this approach to automatically solve many problems that other WYSIWYG editors have.
I like and use Draft, but the separation of a document model from contenteditable is a feature of the editor frameworks mentioned above (as well as others like slate) and is not unique to Draft, or this implementation of an editor around Draft.
It handles the more complex (e.g. CJK) input methods. The alternative is a hidden text field, and then you have to put more effort into emulating the cursor (e.g. up arrow/down arrow). Regularly re-generating the underlying markup removes a large class of edge cases and cross-browser incompatibilities, in exchange for not being able to use the browser's native undo. It's still a lot harder than you'd expect, but it seems to be working better for Draft and ProseMirror than my attempt to normalize the cross-browser differences and rely on the browser.
P.S. I also came into this thread to endorse Prosemirror.
No, light is not different. If you want to adjust for "red/blue shift", you may use special relativity, and in special relativity you have the conservation of 4-momentum, which will give you the same result.
Ah, if my understanding is correct then this makes sense.
In my initial argument I bring in relativity and the invariant speed of light. What I neglected is everything else in relativity, like Lorentz transformations.
I mentioned a discontinuity between the invariant speed of light and summing velocities to calculate momentum. The discontinuity is distance. Velocity is distance over time, but under Lorentz transformations distance is relative. The discontinuity is resolved if you include length contraction (and time dilation).
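For concreteness, these are the standard formulas I am referring to (nothing beyond the textbook forms):

    \[ w = \frac{u + v}{1 + uv/c^{2}} \]   % relativistic addition of collinear velocities
    \[ \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \quad L = \frac{L_{0}}{\gamma}, \quad \Delta t = \gamma\,\Delta t_{0} \]   % Lorentz factor, length contraction, time dilation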
Humans use tools to find patterns. Typically you don't read a dataset and look for patterns. We build optimisations, regression models, search algos, etc. We also build functions that build functions from data. Yes, we know how AlphaGo wins. Its architecture is known and understood, as is its training and how to replicate it. That doesn't mean you can predict what its next move will be, nor "why" it made it, but it does what it was programmed to do.
This is all just an expression of innately human capability. Bigger and faster does not mean different in kind.
From what I see, the only thing that can help with post-mortem debugging is changing the promise so that it doesn't catch the exception when calling the initialiser or handler. You can still do whatever you could with Promises (but you have to do `Promise.reject(...)` or `reject(...)` instead of `throw ...`); it's just not going to conform to the standard. Many libraries are probably going to break if you drop in a Promise implemented this way.
Edit: I didn't explain why this change is important: if the Promise is catching the exception, the execution context where the exception was thrown is lost and (potentially) irrecoverable; the program has to crash then and there if you want to debug the reason for the exception. This is a property of the Promise standard, so changes to debugging tools or the environment won't help.
You could make Node aware of promises, and capture the execution context with promise exception, but I don't think Promises are special enough.
I recently watched Herb Sutter's talk "Leak-Freedom in C++", described in the article [1], and I can't help but notice that the C++ community has a strong bias against garbage collection. ... And then he describes the very problem you can't solve without GC: memory management for graphs with cycles. Solving this problem is equivalent to implementing GC. Of course, having your own specialized GC may help, but you may also benefit from your own memory allocator.
Why can't you acknowledge that there are problems that have GC as the only and best solution?
(Note: it's possible to mix memory management approaches, e.g only pointers to graph nodes are being garbage collected and everything else has a nice hierarchical ownership.)
>I can't help but notice that C++ community has a strong bias against garbage collection. [...]
>Why can't you acknowledge that there are problems that have GC as the only and best solution?
Your prelude and the follow-up question are not well formed.
C++ programmers do not have a bias against GC as a specific problem-solving technique. In fact, expert C++ programmers can embrace GC so much that they can write an entire virtual machine[1] with GC and a DSL[2] for that vm that takes advantage of memory safety. Both the CLR vm and the (original) C# compiler were written by C++ programmers.
What the C++ community doesn't want is GC in the C++ base language itself or the standard runtime. That's a very different concept from a generalized "C++ bias against GC".
In other words, the following is unacceptable:
std::string x = "Hello, " + fullname; // cpu cycles spent on GC
Those cpu cycles spent on constantly checking whether "x" is still reachable are cpu power that's taken away from rendering frames of a 60fps game, computing numeric equations, or high-speed quantitative trading. C++ programmers don't want GC as a global runtime that you can't opt out of. Also, global GC often requires 2x-3x the memory footprint of the working set, which is extremely wasteful for the resource-constrained domains that C++ is often used in.
Herb Sutter's presentation is compatible with "pseudo-GC-when-you-need-it" without adding GC to the entire C++ standard runtime.
> std::string x = "Hello, " + fullname; // cpu cycles spent on GC
You are already spending cycles on memory management (if C++ allocates the character data on the heap, which I think it does here). You are searching for free space in the heap, possibly synchronising to do that, and so on.
With a GC you may even use fewer cycles here! For example, a copying GC could mean that you can allocate with a simple thread-local bump pointer.
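Concretely, the fast path can be as small as this (a rough sketch, assuming a copying collector or arena keeps the free region contiguous; it is not any particular runtime's code):

    #include <cstddef>
    #include <cstdint>

    // Thread-local bump region; refilled by the collector/arena (not shown).
    struct BumpRegion {
        std::byte* cursor;
        std::byte* end;
    };
    thread_local BumpRegion tl_region{nullptr, nullptr};

    void* bump_alloc(std::size_t size, std::size_t align) {
        auto p = reinterpret_cast<std::uintptr_t>(tl_region.cursor);
        p = (p + align - 1) & ~(align - 1);                      // align the cursor
        auto next = p + size;
        if (next > reinterpret_cast<std::uintptr_t>(tl_region.end))
            return nullptr;                                      // slow path: refill or collect
        tl_region.cursor = reinterpret_cast<std::byte*>(next);   // allocation is just a pointer bump
        return reinterpret_cast<void*>(p);
    }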
So in your statement you are already paying an unknown cycle cost for memory management. Why do you care if it's GC?
Your answer is probably the variance in the number of cycles - the noticeable pauses - which is a reasonable concern.
Yes, but in this case we know when the allocations will occur, and when they will be freed. If using a GC we know when they will occur, but do not know when they will be freed. Which means that at some indeterminate point in the future there will be a large temporary slowdown due to processing the GC.
This is one of the bigger reasons people use C++ and even techniques within it to explicitly collect such items at a known point in time. (Techniques such as marking items as dead in an array but still keeping them in there until the end of frame, etc)
> If using a GC we know when they will occur, but do not know when they will be freed. Which means that at some indeterminate point in the future there will be a large temporary slowdown due to processing the GC.
This just isn't true anymore. Incremental collectors can achieve pause times in the single-digit millisecond range, and concurrent collectors can achieve pause times in the single-digit to tens of microseconds range, even for super-high-allocation rate programs. There are even real-time collectors suitable for audio and other hard real-time applications.
But is it deterministic? Will the allocator always have a deterministic amount of work to do to find enough free space for your string characters? I'm not sure that's the case.
You can deactivate GC momentarily and reclaim memory when you want (end of frame, every ten frames, ...). Most of the time you can manage to write your code to minimize allocation, or make sure memory is allocated on the stack. Depending on your parameters and a little bit of profiling, you can manage to have a stable usage of memory over time and a bounded GC time.
This is technically correct (though in most GCs, if you allocate and keep even a single byte, you pay for it with various barriers, etc., forever), but then, because they have good GCs that behave like this, almost every GC language in use allocates all the time.
So it would be more accurate to say "Most GCs will not cost you a single cycle if you and the underlying language runtime and execution environment do not allocate".
I.e. your statement, while true in theory, makes literally no practical difference if allocations happen all the time without you knowing or controlling it.
But in most GC languages there is nothing you can do without allocating. Creating an object is already allocating it on the heap, printing a string will also allocate.
I don't know why they downvoted you. Even in Java, with no value types yet, there are ways to write useful code with little or no managed heap allocation. And if you don't need ultra-low pauses, mixing these techniques, e.g. using the managed heap for the 80% of non-performance-critical stuff and careful manual dynamic allocation for the remaining 20% (e.g. large data buffers), typically gets you all the good things at once: high throughput, low pauses, almost no GC activity when it matters, and convenience when writing most of the code.
I just recently wrote my own memory allocator for my own String. Now my String is at least 2x faster than the next fastest alternative (I have tried many alternatives for C++ strings, including std::string of course). String allocation can be made very fast with thread-local memory pools (you just need a basic GC to free up memory if a lot of strings are allocated on one thread but destroyed on another).
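The general shape of such a pool is roughly this (a minimal sketch of the idea, not the actual allocator described above; cross-thread frees would need the extra handling mentioned):

    #include <cstddef>
    #include <cstdlib>

    // Thread-local pool of fixed-size string buffers. Illustrative only.
    class StringPool {
        static constexpr std::size_t kBlockSize = 256;   // fixed buffer size (made-up number)
        struct Node { Node* next; };
        Node* free_list_ = nullptr;

    public:
        void* allocate() {
            if (free_list_) {                  // fast path: pop from the thread-local free list
                Node* n = free_list_;
                free_list_ = n->next;
                return n;
            }
            return std::malloc(kBlockSize);    // slow path: fall back to the system allocator
        }
        void deallocate(void* p) {             // no locking: the pool is thread-local, so this
            auto* n = static_cast<Node*>(p);   // only covers buffers freed on the same thread
            n->next = free_list_;
            free_list_ = n;
        }
    };

    thread_local StringPool tl_string_pool;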
> For example a copying GC could mean that you can allocate with a simple thread local bump pointer.
This is equally true for explicit memory allocation. The point is that on some allocations under GC, it will have to collect garbage. And collecting garbage will tend to be more expensive than explicit frees, because it usually has to do work to discover what is garbage.
There is an advantage to universal GC: it can simplify interfaces.
For example, in C every time a `const char *` appears in an API there has to be an implicit contract about who owns the string and how to free it. A language like Rust improves on this by enforcing such contracts, but the contracts still complicate every interface.
In a GC'd language you can just forget about these issues and just treat strings (and other immutable objects) almost as if they were ordinary fixed-sized values.
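For example (hypothetical functions, just to show where the contract lives):

    #include <memory>
    #include <string>

    // C-style interface: the ownership contract lives in a comment the compiler can't check.
    const char* get_user_name(int id);   // caller must NOT free; valid until... when?

    // Value / shared-ownership interfaces: nothing to remember, nothing to get wrong.
    std::string get_user_name_copy(int id);                            // caller owns its copy
    std::shared_ptr<const std::string> get_user_name_shared(int id);   // freed automatically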
"Also, global GC often requires 2x-3x the memory footprint of working memory which is extremely wasteful for the resource constrained domains that C++ is often used in."
Memory fragmentation in manual memory management often comes at a similar or higher cost, which is often "forgotten" and neglected because it is hidden.
And fragmentation may grow over time much more than the typical GC overhead. Then you have to do a "forced GC", i.e. restart the application. This often happens to my phone or browser; I'm not sure whether it is due more to fragmentation or to plain old memory leaks.
I don't support using GC for everything. If there is a problem that can only be solved with GC, it doesn't mean that we should express our entire running programs in terms of dynamic cyclic graphs. I believe we can do much better, with much fewer resources.
>Please watch this portion of the video, and the way the speaker is pronouncing the word "collect":
Herb was deliberately using circumlocution to avoid the phrase "garbage collection" so as to not poison the well. The latter part of the video explains his reasoning for presenting it that way:
>I don't support using GC for everything. [...] I believe we can do much better, with much fewer resources.
This is a statement that sounds reasonable and balanced. How can anyone possibly disagree with it?!? The issue is that it's very difficult to take that universal wisdom and construct a real programming language that satisfies all of the following properties seamlessly and elegantly:
1) language that has optional GC
2) has zero cost when not using it. This means no active tracing loop that checks object graphs and no large memory blocks reserved to allow reallocation and defragmentation.
3) transparent syntax that's equally elegant for manual memory or GC in the same language
To make your GC-when-appropriate advice real, one has to show a real implementation. E.g. one can fork the GCC or LLVM compilers and add GC to it. Or fork the Java OpenJDK compiler and show how syntax can remove the GC on demand. Or construct a new language from scratch. There may be some lessons learned from the D Language backing away from GC. Also, Apple Objective-C retreated from GC as well.
> To make your GC-when-appropriate advice real, one has to show a real implementation.
I haven't tested it closely, but I believe that Rust with this crate: https://github.com/Manishearth/rust-gc is an implementation for "GC only when you need it".
Right, but they pay a different cost there that isn't listed:
"It's not entirely compatible with the existing language"
Private and multiple inheritance do not work, for example (yes, I know some of this is fixable, but some is not):
public __gc class one { int i; };
public __gc class two : private one { int h; };   // error: a __gc class cannot privately inherit from another __gc class

__gc class a {};
__gc class b {};
__gc class c : public a, public b {};   // error: a __gc class cannot have multiple base classes
At least with the real-time applications that I have experience with, this is often mitigated with custom memory allocators which pre-allocate large contiguous blocks before performance-critical sections and then parcel out segments of those blocks at runtime.
Which is also possible in GC-enabled languages, so one can make use of a GC and then use similar techniques for the code paths that really require them.
One does not need to ever use malloc()/free() or new/delete in C++. For instance, it is very common to use just the DATA and BSS segments in embedded code, and rely on placement-new, which is always deterministic.
Dynamic memory allocation is also a choice in C++.
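A minimal sketch of that pattern (static storage plus placement-new; the struct is made up):

    #include <cstddef>
    #include <new>

    struct Motor {          // some device-driver object, purely illustrative
        int speed = 0;
    };

    // Storage lives in the DATA/BSS segment, not on the heap.
    alignas(Motor) static std::byte motor_storage[sizeof(Motor)];

    Motor* init_motor() {
        // Construct the object in pre-allocated static storage:
        // deterministic, no call to malloc/new at runtime.
        return new (motor_storage) Motor{};
    }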
Going a bit off topic, that is something you can also do in Ada or SPARK, while having a more type safe language.
My problem when I used to code in C++ was working with others in corporate projects.
It didn't matter how hard I strived to write modern code, making use of best practices to write safe code; there were always team members who never moved beyond C with Classes, never mind having any processes in place to avoid precisely that.
Even nowadays, I get the feeling most developers don't make any use of static analysis.
Sadly, I don't think that's a problem that a language alone can solve. Ada, for instance, is awesome with its runtime enforcement of design contracts, but all a programmer needs to do is change the contract and chaos ensues.
I'm a huge fan of strong typing. I'm also actively trying to find ways to improve static analysis and formal methods. But, if you can't trust your developers, it all eventually breaks down.
I find that for most mature projects, a good developer needs 3-6 months of ramp-up time, which should include knowledge transfer, training, and restricted commit access. The point of this isn't to haze the developer, but instead to give him/her a chance to fully grok the intent of the mechanisms in the code base and to (hopefully) present and socialize better options in a controlled way. More and more, I've come to the conclusion that a strong team mechanic is one of the mandatory components of good software. DBC, code reviews, unit testing, formal analysis, and static analysis all help to reinforce this mechanism, but if the tribal knowledge of the team breaks down, then so ultimately will the quality of the software produced.
There are indeed cases where you have to use GC on some part of your program (as you say, some graph with cycles). However, such things are still only a small part of a program, and most of your memory allocations can still be handled without a GC.
No programming language I am aware of (and I don't claim to know them all) is good at saying "GC this 5%, I'll explicitly handle everything else". In C++ the "GC" bit is painful; in most other languages the "I'll do everything else" bit is painful.
Of course, there is an argument that just GCing everything is as efficient as bothering with manual memory management, so why bother at all? I'm still to be convinced about that, particularly because GC systems tend to want to use quite a bit more memory, so they have room to breathe.
> Of course, there is an argument that just GCing everything is as efficient as bothering with manual memory management
I specifically said that it should be possible to mix GC with other types of memory management.
The problem is that I often hear from C++ crowd "GC is bad, mmkay!", without mentioning that there are specific cases where you have to use/implement GC.
> Why can't you acknowledge that there are problems that have GC as the only and best solution?
GC is traditionally bundled into a language runtime in a way that imposes global costs, and cannot be opted out of. The GC interrupts your program in a way you have no direct control over and scans the global heap of all objects you have ever created.
C++'s philosophy is zero-cost, opt-in abstractions. So naturally anything that you can't opt out of is going to rub C++ programmers the wrong way.
Implementing GC within the language, in a way that is entirely opt-in, is fine. Any muttering under the breath when discussing this is, IMO, just acknowledging that most of our experiences are of "bad" GC, to the point that we almost don't want to use the same word when referring to "good" GC.
> C++'s philosophy is zero-cost, opt-in abstractions.
It's a nice philosophy, but unfortunately C++ itself often fails to deliver unless the programmer is an absolute expert on the underlying semantics. E.g. forget to pass a vector by reference or as a pointer, and you get a complete O(n) copy underneath. With other data structures, one can implement efficient copy constructors to make this pretty cheap. When an abstraction leaks details of the underlying implementation that then lead to huge non-obvious costs, that is not an abstraction.
Another example is that C++ heap allocation relies on an underlying allocator, which has overhead in managed chunks of memory, size classes, etc. The underlying allocator's alloc() and free() operations are not constant time. In fact, they are almost always outperformed by the allocation performance of GC'd languages, where bump-pointer allocation and generational collection make these overheads very, very cheap.
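The vector example looks roughly like this:

    #include <numeric>
    #include <vector>

    // Accidentally copies the whole vector: an O(n) allocation and copy on every call.
    long sum_by_value(std::vector<int> v) {
        return std::accumulate(v.begin(), v.end(), 0L);
    }

    // One character of difference removes the hidden cost.
    long sum_by_ref(const std::vector<int>& v) {
        return std::accumulate(v.begin(), v.end(), 0L);
    }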
C++ is an expert's language, no doubt about it. But wielded properly, it does deliver on its promise pretty well, especially with recent language evolution.
C++ lets you use a bump allocator if you want. malloc()/free() are just functions that you don't pay for unless you call them. That is the point! And as others have pointed out, many common data structures let you swap out the calls to malloc()/free() for something else if you want to.
Every data structure in C++ has an allocator override. If you want to use a bump pointer allocator, you can use a bump pointer allocator. In fact, this is a very common optimization in game software during time-critical frame rendering. Allocate a large chunk of memory to act as a flywheel, then use a bump allocator against this chunk of memory while performing data-heavy crunching, then reset the bump pointer at the end of the render cycle. All memory is freed at once, without the use of a more intrusive garbage collector.
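With C++17's polymorphic allocators the flywheel pattern can even be written with standard components (a sketch, not taken from any particular engine):

    #include <cstddef>
    #include <memory_resource>
    #include <vector>

    void render_frame() {
        static std::byte frame_buffer[1 << 20];   // the pre-allocated flywheel block
        std::pmr::monotonic_buffer_resource arena(frame_buffer, sizeof(frame_buffer));

        // Every allocation made by this vector is a simple bump into the arena.
        std::pmr::vector<float> scratch(&arena);
        scratch.reserve(10'000);
        // ... per-frame, data-heavy work ...

    }   // arena goes out of scope here: all memory released at once, no per-object frees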
As they say, C++ gives you enough rope to hang yourself. It's pretty unapologetic about not being a language meant for everyone. But, sometimes, one needs to drop down to a lower level language to boost performance. I like to apply the Pareto Principle: 80% in a higher level language, and 20% in a language like C or C++.
Sure, we use our own arena allocators with varying degrees of success in my current project (V8, a JavaScript VM implemented in more than 800 KLOC of C++). The fact that C++ allows you to peer behind the curtain is in a way its own admission that it is inadequate to meet all use cases. It always allows one to resort to "manual" control. Usually that manual control comes at the cost of literally violating the language semantics and wandering into the territory of undefined behavior. As tempting as this manual control is, my 15 years of C++ versus my 10 years of Java and JVM implementation make me feel that C++ causes far more trouble than it's worth, especially for projects that have no business at all trying to outdo the defaults.
I think that's mostly your opinion. One can remain very tight to the language semantics and still get a lot of fine-grained control over performance in C++.
No language can meet all use cases. Every language has features that it is best suited for, and features that are a bit of a pain. For languages like C and C++, the pain points are higher-level programming semantics. For languages like Java / JVM, the pain points are fine-grained control and low-level bit twiddling.
I recommend an 80/20 split. Write 80% of your code in a high-level language of choice that mostly meets your goals. Profile the hell out of it, then use that language's FFI to refactor the slow bits in a lower level language like C/C++.
There will be constraints, such as system level programming or embedded system programming, where a language like C/C++/Rust can't be avoided. But for general purpose work, the Pareto Principle works pretty well for balancing time-to-market with performance.
Along the same lines, the thing that is annoying me now is return value optimization. It's apparently required by the standard in specific circumstances, but there's no (obvious) syntactic indication of whether you are using it.
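For what it's worth, C++17 pins down part of it: copy elision is guaranteed when returning a prvalue, while named return value optimisation remains optional. Roughly:

    #include <vector>

    std::vector<int> make_a() {
        return std::vector<int>(1000);   // C++17: elision guaranteed (prvalue return)
    }

    std::vector<int> make_b() {
        std::vector<int> v(1000);
        // ... fill v ...
        return v;                        // NRVO allowed but not required; worst case, a move
    }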
You can implement allocators which are very fast, much faster than generic malloc and free. Then people that do care about this level of performance can write their own ones, optimised specifically for the platform they are using and problems they encounter.
> C++ community has a strong bias against garbage collection
Well, it's a bit of a selection bias on your part IMO. The people who use C++ have seen the benefits of Java, python, C# and all the rest. And many of them(us) use Java, python, C# and stuff for meta-productivity build tools and the like. GC is great for some enormous set of problem domains. But for problems that cannot endure high or unstable latencies, GC is a bad solution. I'm writing C++ because it hits a sweet spot in the intersection of (low/deterministic latency + ubiquity among developers).
> Why can't you acknowledge that there are problems that have GC as the only and best solution?
There are, but I probably wouldn't have used C++ for them. In some rare cases I would but only because of momentum on an existing legacy solution.
I write C++. I write a lot of it, I've been using it since 1987.
I love LISP and Smalltalk, but I'll never ship a product in those languages. I've written a ton of C# and Java and somewhat less Javascript; those environments are fine (though I found myself paying much closer attention to object lifetime than I wanted).
I'm dead set against garbage collecting C++ because the semantics just don't fit well with the available runtimes. A few times I've written garbage collected heaps for specific types of objects; crossing the boundary between the hand-allocation world of C++ and the dynamic world of a GC environment is not a happy or particularly efficient experience, especially in the presence of threads.
You choose C++ to be able to control performance more than in lots of other languages. In a game engine or other very performance-intensive software, people tend to think a lot about memory management. In very rare cases the optimal solution for a part of a system may be something similar to a garbage collector, but that is largely beside the point: what matters is not the final solution, but the fact that you can (sometimes quite creatively) control and constantly fine-tune things so that you can arrive at an optimal solution.
Having been part of the community, but with experience in many other languages, another common trait seems to be ignoring that many other languages also have features for writing cache-friendly code just like C++, while being safer.
A good example is Modula-3, a systems programming language that, while it does have GC by default, also allows for global and stack allocation, value types, bit and structure packing and, if really necessary, naked pointers (only allowed in unsafe modules).
Or Mesa/Cedar at Xerox PARC with its mix of RC/GC, which is basically the same as C++'s *_ptr<>() smart pointers plus deferred_ptr<>(), just implemented at the language level.
Same applies to RAII, yes it was a very good invention in C++, but many other languages do offer good ways of doing resource management, for example bracket in Haskell or withResource in others.
I think the reason the C++ community has a strong bias against garbage collection, as you say, is that it has a bias against all sorts of unpredictable/unwanted behavior. That's the essence of C++: pay only for what you need/want/use. Garbage collection wouldn't be appropriate in some situations.
So re-implementing your own GC algorithm in the special cases where you need it is seen as acceptable. That deferred pointer could be part of the standard library, as long as those who don't need it don't pay any performance overhead.
I don't know if I would acknowledge that there are problems that have GC as the only and best solution. Perhaps there are certain specific memory management problems for which a GC is the best solution. But what if you avoid that entire class of problems by adopting certain programming paradigms, or by the design of your programming language? For instance, Rust ensures memory safety without a GC. And while I write C++, I rarely find myself directly calling new; most of my objects are allocated by STL containers or built on the stack.
Inherited mutability and no aliasing mean cyclical structures tend to be a fairly complicated mess of RefCell/Cells. The borrow checker is very good at handling ordered lifetimes but cannot handle more complicated access patterns without the runtime checks in RefCells.
There are graph libraries, but these only help with structures that are graphs in the strict sense (all edges are created equal and not actually part of the structure) as opposed to structs with one or more fields that reference (and have shared ownership of) another node in the graph.
Right, but then you also shouldn't be using the techniques described in the article, right? To put it another way, there's no case where deferred collection with smart pointers is better than real GC.
Because GC wastes cycles on the user's machine instead of just hiring a competent developer who can manage memory properly. As we move to a more mobile world, wasted cycles matter more. So no: while GC is the definitive answer to "how do I allow my dog to make a 'press this button for fart sound' app?", it is most certainly not the definitive answer (though perhaps a passable one) to "how do I write good, fast, stable software?"
How does a "good programmer" properly manage cyclical graphs without very specific, static guarantees about how they're used (e.g. All cycles are parent/child)? Because I'd love to know a way to do this without using/implementing the equivalent of a GC.
In addition to what adrianratnapala said, it is perfectly fine to have specialized GCs for specific problems, and I suspect that any moderately complex C++ program in fact does (and no, that's not Greenspunning). The problem is using GC as the general memory management solution when 95% of the allocated objects have a single reference or a simple lifetime pattern.
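A specialized collector of that kind can stay very small. A sketch of the idea (mark from known roots, sweep a dedicated node pool; only graph nodes are collected, everything else keeps ordinary ownership):

    #include <algorithm>
    #include <memory>
    #include <vector>

    struct Node {
        std::vector<Node*> edges;
        bool marked = false;
    };

    class GraphHeap {
        std::vector<std::unique_ptr<Node>> nodes_;   // the pool owns every node

    public:
        Node* make_node() {
            nodes_.push_back(std::make_unique<Node>());
            return nodes_.back().get();
        }

        void collect(const std::vector<Node*>& roots) {
            for (auto& n : nodes_) n->marked = false;
            std::vector<Node*> stack(roots);
            while (!stack.empty()) {                 // mark: everything reachable from the roots
                Node* n = stack.back();
                stack.pop_back();
                if (n->marked) continue;
                n->marked = true;
                for (Node* e : n->edges) stack.push_back(e);
            }
            nodes_.erase(std::remove_if(nodes_.begin(), nodes_.end(),   // sweep: drop the rest
                             [](const std::unique_ptr<Node>& n) { return !n->marked; }),
                         nodes_.end());
        }
    };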
A not-well-known fact is that even the Linux kernel has a GC implementation, used to collect file descriptors, but any suggestion of GCing all memory in the Linux kernel wouldn't be well received.
In most of the graph problems I have ever dealt with, the graph is constructed piecemeal but destroyed as a whole after you are finished with it.
It is true there are specific problems (e.g. git repos) where you want a mutable graph over the long term, but I have never actually had to deal with those cases, unless it was a special case with an obvious solution (like a doubly-linked list or tree).
Even in the case of a mutable graph, one can use copy-on-write semantics to create a DAG of graph revisions that can be managed or compacted. I believe this was the strategy used by Subversion.
Compaction and collection can both be trivially performed on such a structure by making a deep copy of the latest revision of the graph to a new memory arena, just as is commonly done with a Cheney collector. Old locations are back-patched with pointers to the new locations, which solves the cycle problem.
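In sketch form, the copy-and-back-patch step might look like this (copy everything reachable into a fresh arena, record forwarding pointers, then patch edges; cycles are handled because each node is copied at most once):

    #include <memory>
    #include <unordered_map>
    #include <vector>

    struct GNode {
        std::vector<GNode*> edges;
    };

    // Copies the subgraph reachable from `root` into `to_space` (assumed empty here)
    // and returns the new location of `root`.
    GNode* compact(GNode* root, std::vector<std::unique_ptr<GNode>>& to_space) {
        std::unordered_map<GNode*, GNode*> forward;   // old location -> new location
        std::vector<GNode*> worklist{root};

        while (!worklist.empty()) {
            GNode* old = worklist.back();
            worklist.pop_back();
            if (forward.count(old)) continue;                    // already copied
            to_space.push_back(std::make_unique<GNode>(*old));   // shallow copy; edges still point at old nodes
            forward[old] = to_space.back().get();
            for (GNode* e : old->edges) worklist.push_back(e);
        }
        for (auto& n : to_space)                                 // back-patch every edge
            for (GNode*& e : n->edges)
                e = forward[e];
        return forward[root];
    }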
Well, it's a Cheney GC. To be generational, there would need to be more than two heaps. But, that's hair splitting. Heh.
I doubt one would find many C++ folks who would disagree that GC is useful some of the time. It's all about controlling when and where GC is used, which sort of hits home with Sutter's point.
What are people using cyclical graphs for? Most of my daily work is in a sad language which has GC and structurally prohibits cyclical references[1], so what am I missing out on?
I did. You wanted acknowledgement that there are problems for which GC is best solution in C++. I explained why it is never best and likely never anything beyond "barely acceptable".
Do you understand that the linked article, and the talk referenced in the article describe a way to implement garbage collection as a solution to (memory management for) data structures with cycles?
Do you think that this problem is not equivalent to "how do I allow my dog to make a 'press this button for fart sound' app?"?
Do you have a better solution to reclaiming data structures with cyclic pointers?
I am not doubting that you are right, but this argument does not quite amount to a proof. That is because you would have to show that there are cases where the lifetimes of the graph's elements cannot be determined from consideration of the semantics of the problem.
The answer is that it's technologically impossible to prevent third parties from using your work if you publish it. DRM doesn't solve this problem, but claims to do so. As a result, genuine users suffer from DRM.
There was a post on Reddit recently which is a great example of this. Somebody said that Netflix didn't support their monitor because it was too old (i.e. it didn't support HDCP). One of the comments suggested getting an HDCP stripper, a simple $10 device, which will disable the DRM.
Yes, all DRM is easy to bypass right now, but it works as a way to get studios on board with digital distribution. This brings up an interesting point. The main argument against DRM is that it is a slippery slope which will lead to more violations of freedom. But the problem with slippery slope arguments is that they're often unsubstantiated. We often don't know what the long-term effects of something will be.
What if DRM is actually serving the opposite purpose? By appeasing studios with weak protections, it may be preventing stronger digital locks from being developed. It could be that if the FSF and other anti-DRM organizations are effective in removing current standards, the industry will respond by developing something even worse, leading to an ever-stronger DRM arms race.
I'm not saying that I know this will be the result either, just that we don't really know what the effects of defeating standard DRM interfaces will be. The only real solution I can imagine would be to get content distributors not to want DRM, which is a very hard proposition. They have the money and the power, and they won't stop until they get what they want.
That's because DRM isn't and never has been about preventing copying. The intent has always been to transfer power from consumers to the studios and tech manufacturers. It doesn't matter if the DRM can be defeated by some subset of consumers, as long as the idea takes hold that you don't have the right to use your purchases as you see fit. As long as this erosion of property rights and the doctrine of first sale becomes normalized and you start believing in artificial scarcity, DRM will have served its purpose.
This is why it's so important to never compromise and accept any form of DRM. Compromise only shifts the Overton window[1] making change harder in the future.
> it may be preventing stronger digital locks from being developed
Even if "stronger digital locks" was the goal, you don't prevent future locks by allowing them today.
> the industry will respond by developing something even worse
They already do that.
> They have the money and the power
So they can use some of that money to develop their own players if they want to push DRM. There isn't any reason browser authors and the public in general should subsidize selfish businesses.
I'm not so sure that having a standard way to connect DRM to a browser is changing the Overton window. It's a technical standard that no users are actually looking at. What percentage of the population would even know the difference between an NPAPI plugin and an HTML5 interface for DRM? If you went out on the street and asked people whether they feel less in control of their media because the W3C approved a standard replacement for NPAPI in browsers, would anyone even understand what you're talking about?
There are historical examples where weak DRM became standard and never got replaced. Look at CSS for DVDs. It was broken early on, but nobody bothered to replace it because it was already standard and the hardware was out there for it. Yes, there's different copy protection on Blu-Ray, etc., but a lot of people still use DVDs, and they can easily back them up because of weak encryption.
There's definitely a lot of benefits to creating a culture that values personal control, but I'm just not sure this is working. I want a DRM-free world as much as anyone, but the message is muddled and people just want their Netflix. If Mozilla and the W3C both came out against it, Chrome, Safari, and Edge would still support it, and I think all it would do would make Firefox lose even more market share. I would love to see some evidence that it would come out another way.
Yes. That was my point. The goal is to change public attitudes, not practical enforcement of copyright. This has always been about shifting public discourse.
> It's a technical standard that no users are actually looking at.
Of course users aren't looking at the standard. The shift happened with the technically-minded people that eventually make recommendations to their friends and family. Just look at this very thread where people like you already accept the premise that DRM is anything other than malware that gives control over your hardware to some other party. The fact that you are making arguments that use language such as calling DRM a "digital lock" demonstrates how far the Overton window has already moved.
> If you went on the streets and asked people if they feel less in control of their media because the W3C approved a standard replacement for NPAPI in browsers, would anyone even understand what you're talking about?
You're trying to frame that question to get the answer you want. Of course most people are not familiar with NPAPI. However, if you skip the technical jargon and actually ask people about their experiences, you will get very clear answers. I've literally never met anybody who wasn't directly profiting from DRM who thinks crippled video players are fine. Many have mentioned things they would like to do but can't because of DRM.
> standard replacement for NPAPI
EME is not a replacement for NPAPI. At best it's a replacement for the DRM in Flash.
> weak DRM became standard and never got replaced. Look at CSS for DVDs.
Except it did get replaced - which you admit - in the next version of the hardware (Blu-Ray). The only reason DVD wasn't affected is the large amount of existing hardware. It's simply not possible to update all of the existing hardware players.
However, web browsers are software that updates regularly.
I'm not a good representative of public discourse. I've read Richard Stallman's blog for over fifteen years.
What you're talking about with hardware is exactly my point. We're talking about encryption, which I'm sure you support for individuals. Public Key Encryption is great for when you want to send a secret message to someone you trust to keep it secret. But what if you don't trust them? You have to convince them to trust you to have some control over their system, even if in a jail or a restricted VM. DRM is sender-controlled encryption employed by software.
So what happens if you tell the sender you refuse to run software you don't control and they still don't trust you? Their only other option is to convince you to use hardware they control. So rejecting broadcaster-controlled software might just lead to a demand for more broadcaster-controlled hardware. It's been done for years, but now we're moving from a full hardware solution to a more software-based solution, something you can contain and easily run with whatever restrictions you want.
I'm not saying it's good, but I'm not saying it's definitely not progress either.
I work in the video streaming sector and we get the shivers when a client wants a web application. And if anyone thinks media producers will allow their content to be streamed over a DRM-free channel, they're either naïve or stupid.
What Google, Netflix and others want is to stop the mess this is currently on browsers.
Exactly that. I have issues watching copy-protected DVDs on my PlayStation: if I pause the film for a short while, the copy protection kicks in and I can't resume watching; instead I have to restart and fast-forward to where I'd paused.
That's content protection preventing me, a purchaser, from using it properly.
True, it's likely a bug from either the disc or the player but if they weren't attempting DRM I wouldn't have the issue. Inconveniencing legitimate users because you can't implement the protections without it breaking isn't the way to go.
I know I can download a copy of a film, push play/pause and it will just work. I know if I buy a dvd I'll have to sit through unskippable piracy messages and ads and not be sure the film will play after pausing.
Even if it's not the DRM's fault here, there are plenty of other examples. E.g. you can't copy/back up a DVD to your computer/stick/cloud, so once the DVD format is deprecated (i.e. Macs no longer have a DVD drive) or the DVD is lost, you can't play it anymore. Not to mention the lost convenience. The latest wonder from the DRM promoters is HDCP: people with 4K TVs can't play 4K content anymore because of this new "feature"[0]. Apparently the only sane solution is to hack the HDMI cable.
> DRM doesn't solve this problem, but claims to do so.
If you actually listen to any of the arguments being made on the W3C mailing lists, none of the pro-DRM sides have actually argued such an absolute stance, because they're not stupid and can see DRM regularly getting broken. The argument primarily centres on "casual piracy"—some technically illiterate user sending a copy of "something fun" to their friends—and not on eliminating piracy or preventing third parties from using your work.
Such "casual piracy" is legal in my country, at least when it comes to music (and we pay for the priviledge, unfortunately). Publishers shouldn't mess with my rights.
It sounds like they believe there are more potential sales there than there are from other forms of pirates. (Whether that's true or not is anyone's guess!)
If you're a photographer, I can always make a screenshot, or record the video from my HDMI/DVI cable to the monitor or take a very precise photo of my screen, and I WILL get the photo from your website or app.
There is plenty of evidence that DRM doesn't stop copying: millions of torrents ripped from crunchyroll/hulu/netflix/Blu-Rays, all of those have some sort of DRM, all of them were circumvented. There are people who think that DRM is not designed to stop copying, but it's designed to control how legitimate users consume your product (see: DVD ads).
Edit: Please don't assume that this is the only argument I have, it's just the most obvious argument from the top of my head. There are plenty of people who explain the negative sides of DRM and reasons it doesn't solve the problem you described. They do it in a very eloquent way with rigorous arguments, and I don't believe that I need to repeat those arguments. I'd like you to listen to Cory Doctorow: https://www.youtube.com/watch?v=HUEvRyemKSg
Thanks. It's an interesting discussion that needs to be had. As I said, I often see things along the lines of "it's just bad, m'kay" without any reasons. Your explanation is reasoned and cogent. Again, thanks!
Producers don't watch BitTorrent statistics. They send a document asking stuff like: will my product be DRM protected?
If you answer no, then farewell pal, they won't allow their content to be on your platform.
Because it's literally impossible. If you want someone to be able to read your text or view your image, in the end the light has to reach the viewer's eyes, and that means it can be recorded. At best, DRM can be an annoyance. It can never stop unauthorized redistribution of material.
A perfect example from the 80's: an arms race to prevent copying of software, which ended up doing what?
Software still got copied, while increasing the publishers' costs.
Now, 30 years later, preservation efforts are stymied by copy protection on failing hardware. In an ironic twist, it is the copies whose protection was broken by pirates that remain salvageable.
How can you expect anyone without a time machine to explain what it ended up doing?
For example, we live in a world where companies like Adobe or Autodesk can sell software licenses for thousands of dollars. Would that be true if software piracy became the norm decades ago? Would we be better off one way or the other? Who can say?
What if piracy was never invented and we all just paid our dues. What a happy little libertarian utopia.
>Would that be true if software piracy became the norm decades ago?
How many decades ago? I built my first computer and installed pirated Windows and Photoshop versions back in '97. Warcraft had questions during installation whose answers came from the lore in the manual. Do you think people in the 80s with the first personal computers would watch a friend use a new piece of software and then wait 4-6 weeks for their own floppy disk to arrive in the mail?
Not to detract from your argument, but most libertarians support either substantially scaling back or entirely eliminating IP law, including copyright law.
Internet piracy in general seems to be culturally quite left-libertarian.
They didn't anyway. But all of that is irrelevant. The point is that questions like the one I originally responded to are fundamentally unanswerable. Don't get too caught up in the specific example. It could just as easily be "maybe walking across the street on a different day causes RMS to be hit by a bus". Or Microsoft taking a different path delays the Gates foundation from eradicating polio by 30 years.
Because it works in a similar way to general security: it's reactive to the state of the art of those looking to get around it. Once someone has dedicated the time to getting around it, those wanting to get around it have a free pass to use that content in the ways they want, whereas those who have no intention of doing so are restricted in their use (which is usually more locked down than it needs to be for genuine users, and thus more inconvenient).
I liked seeing the image as a whole. Had the publisher made it not directly viewable, it would have been a great pain to make that happen. As it was, I just opened it in another tab.
SnapChat became popular because it restricts what people can do with a post. Its users don't seem to be suffering. So it seems that there is demand for this sort of thing from many users.
We thought long and hard about the type of platform we want to use for this project. While the x86 platform has many shortcomings, it also provides a big ecosystem of OSes and software, as well as support.
The fact that we have two subsystems, with the secure microcontroller being in charge of supplying (and denying) power to the x86, gives us a lot of leverage and protection.
- http://www.ifp.illinois.edu/~jyang29/papers/chap1.pdf
* Image super-resolution: Historical overview and future challenges
- http://www.robots.ox.ac.uk/~vgg/publications/papers/pickup08...
* Machine Learning in Multi-frame Image Super-resolution
- http://www.cs.huji.ac.il/~peleg/papers/icpr90-SuperResolutio...
- http://www.wisdom.weizmann.ac.il/~vision/SingleImageSR.html
- http://chiranjivi.tripod.com/EDITut.html
- http://www.tecnick.com/pagefiles/appunti/iNEDI_tesi_Nicola_A...
- http://www.eurasip.org/Proceedings/Eusipco/Eusipco2009/conte...
- http://bengal.missouri.edu/~kes25c/
* http://bengal.missouri.edu/~kes25c/nnedi3.zip
* http://forum.doom9.org/showthread.php?t=147695
- http://arxiv.org/pdf/1501.00092v2.pdf
* http://waifu2x.udp.jp/
* https://github.com/nagadomi/waifu2x
* http://waifu2x-avisynth.sunnyone.org/
* https://github.com/sunnyone/Waifu2xAvisynth
- http://i-programmer.info/news/192-photography-a-imaging/1010...
* https://github.com/david-gpu/srez
- http://arxiv.org/pdf/1609.04802v1.pdf
- https://github.com/alexjc/neural-enhance