I know, but why should this have to be a tack-on? These are things that can drive anyone crazy, and have for the last six versions of OS X. They're willing to completely overhaul interfaces elsewhere (iTunes, iOS), so why not add some unambiguously useful features here?
Article L.3421-4 of the "Code de la Santé Publique" (law concerning public health) forbids incitation/encouragement of the usage or traffic of classified narcotics, even if it does not lead to anyone actually using them. The sentence is heaviest if it's done near schools. Isn't there something vaguely equivalent in US law?
The interesting part is that the act of presenting classified narcotics under a favorable light (in press for example) is punishable under the same article.
In some regards this is pretty vague, and I've seen government-funded studies that end up presenting cannabis "in a favorable light", so I'm not sure how much this is enforced.
"Isn't there something vaguely equivalent in US law?"
Nope. The closest would probably be plans for making nuclear bombs. Oh, and the stupid crypto export rules. If I were bored enough right now I'd paste some long crypto key, but it would just make this thread look shitty.
You'd think this would be obvious. It's an inefficient and poorly designed "bytecode":
- The only number representation available is floating point.
- Concurrency is impossible except for the nearly useless web workers API; this prevents a language author from exposing any other type of concurrency mechanism.
- Anyone wishing to implement the "byte code" specification must implement or integrate a full Javascript parser and runtime.
What's wrong with Apple trying to control which binaries it will and won't let you run on its OS? Doesn't Apple always have your best interests at heart?
It should be obvious, but to make it even clearer:
- Centralization of control. Apple has frequently jacked around with iOS programs being allowed/disallowed; why should we let some central authority control what we have on our computing devices?
- Centralization of malware. Monocultures are subject to waves of viruses.
- Limitations. The more interesting your app, the more places it needs to touch. A "fun" limitation I noticed this morning is that Mail.app's sandbox poses significant limitations for GPGMail. That's not good; hopefully I can continue to have encrypted email at will with Mail.app. Fortunately, the open source world provides email clients that can encrypt.
- Should Apple know what I have installed? App stores give them that knowledge. Is there a right that App stores take away?
Obviously, I have no great faith in Apple, Microsoft, Google, Facebook, or the other centralization advocates. I don't see that I should.
"It should be obvious", followed by a semi-paranoid rant just makes you sound like a wild-eyed conspiracy nut. "Monocultures are subject to waves of viruses." Whoo! Actual waves! That is scary. And buying from the App Store lets Apple know what you have bought. I never thought of that, that is really scary too. But the following non-sequitor just baffles me.
I think he was driving at the idea that people who make malware tend to go for larger ecosystems.
The reality is that everything you do on a network involves some form of risk. You can mitigate these risks by performing tasks in a standardized way using only approved software, but a packaged Zero-Day that's tuned for your environment will generally succeed.
A payload tuned for a Kaspersky-protected environment isn't that hard to find any more; keeping attackers from knowing which anti-virus you're running is your responsibility.
In short, everything is about risk mitigation. Running the same software as everyone else exposes you to the same risk.
By the way, this point is tangential to the larger point at hand which is: Apple doesn't care about its developers.
I don't see how any of what you said is relevant. The distribution mechanism (Mac App Store) has absolutely nothing to do with everyone running the same software. And in fact the required sandboxing (one of the alleged problems with the Mac App Store) goes a long way to mitigate a lot of risks in remote exploits.
If the Mac App Store had a grand total of 5 apps then I could see where you're coming from, but it launched with over a thousand apps and it's had 1.5 years since then to acquire many more. There's no monoculture.
Do you have any evidence for your claim that malware would be worse due to centralization?
Apple has a very good record on malware via both the mac and iOS app store, best I can tell.
I completely get why a dev would hate the app store, but from an average consumer's standpoint it seems brilliant. Unless you are scared of Apple finding out that you installed "Evernote" or "Twitter". OMG!
I use Intellij (and the derivative editors RubyMine and AppCode) as well.
IMO the main advantage of vim is it's instant. No waiting for the IDE to load. IJ takes quite a while to load and you have to have your project set up.
Another major advantage of vim is it's available everywhere. SSH to your production server and edit just like you do your code.
But I have to say: I much prefer coding in the JetBrains editors. They offer a UX that can't be matched by vim. They are just smarter, especially for statically typed languages, but even for dynamic ones they are damn good. And you can use vim key bindings if you like.
Java is still widely used. I'm looking forward to this -- there are some projects we can't take to Scala due to client concerns around a less mainstream language.
> Java is still widely used. I'm looking forward to this -- there are some projects we can't take to Scala due to client concerns around a less mainstream language.
I hope I am wrong, but I smell a trap here. Some time ago I implemented a solution (for .NET, that is) that employed some more modern techniques (reflection, recursive lambdas, etc.). Unfortunately, many of the programmers in charge of maintaining my code were either interns from a local technical school or staff from an outsourced company in India. They couldn't get their heads around the techniques I used. They trashed my code, I ended up with a bad reputation, and I lost the client. Not something I regret, anyway.
I smell trolling; however, assuming this anecdote is genuine:
(1) interns should never be allowed to commit code to the master branch without a code review from a senior developer
(2) there are many competent developers in India, or around the world, so if your company needs to outsource, it can do so by being a little careful about who they employ
(3) if the above 2 are not possible, simply look for another job, because life is too short
I can give many examples where I've handed code over and later there were problems due to less-skilled developers.
One good example I have is being called by a client who had a bug they couldn't fix in some code I had written years before. After looking into it, I found something like this:
// ABC 1/1/2001 removed this call as we don't understand why it was being called
// x = performImportantCalculationABC();
There are lots of terrible programmers out there. The average level of programming skills on HN is considerably higher than you'll find at most companies.
I've seen so much bad code written by people with senior job titles; confusing, ugly, inflexible, but working code.
You're forgetting that the smarter programmers realise all they need to do is fix something really fast, so their managers are like, "oh my goodness, you're excellent!" But if you get caught up in the details and spend several weeks "fixing" something that your manager won't even see or understand, you're a shit programmer.
So really, you gotta get your priorities straight.
If your priority is "ingratiate yourself to your manager" then yeah, it's best to pick small superficial problems or prefer patch-job fixes. These decisions tend to be informed by how much technical debt the "short" fix creates.
But if a problem genuinely will take 6 weeks to solve, then it's probably a worthwhile problem to solve. Those sorts of problems tend to have far-reaching implications for sites and their ability to scale. The big question is when to solve it. Competent engineers know when to do this.
Managers who can't see this? Bad managers. Bad managers can kill a startup just as effectively as 'rockstar' engineers.
No, I was not trolling. The key to understanding the whole thing is that it was not a software company but a construction company, and I was there as a consultant.
For them, software is a cost, not a source of revenue. That's why they didn't implement rigorous practices such as code reviews. That's why they preferred the cheaper coders over the better ones. And that's why I had already followed your 3rd piece of advice.
For the same reason any other large enterprise does...so many special cases in their organically grown 'systems' that it would be cheaper to get something custom coded than try to shoehorn their process into an off the shelf environment.
Funny, I've had it the other way: I was working on a project as a senior dev and had interns writing Godless functional code using Guava and Functional Java, and having a grand old time seeing how much concurrent code they could fit into a single statement.
The fact is that Java is not a functional language and to bolt on functional pieces and then ignore the imperative, OO nature of Java is stupid. Don't get me wrong, I love functional programming, but Java is not and will never be a functional programming language, and developers that hate on programmers who use a language in the manner in which it was intended to be used are the real fools.
It is a poor programmer indeed that writes pathological code (and functional code in Java is pathological) and then blames his team for not understanding it.
There are significant lessons to be learned from functional programming, not the least of which is the value of immutability in making complex systems comprehensible.
Immutability is almost completely necessary for easy-to-understand concurrency, and extremely helpful in designing easy-to-understand APIs and operations on in-memory-models.
It is a poor programmer indeed that discards the lessons of FP and instead writes purely imperative OO. Your interns may have been writing bad code, but it wasn't the "functional" part that was the problem.
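To make the immutability point concrete, here's a minimal sketch (hypothetical, and in C# since that's the language of the code later in this thread): all state is fixed in the constructor, so instances can be shared across threads or handed to callers without anyone changing them behind your back.
public sealed class Point
{
    private readonly double x;
    private readonly double y;

    public Point(double x, double y)
    {
        this.x = x;
        this.y = y;
    }

    public double X { get { return x; } }
    public double Y { get { return y; } }

    // "Mutation" returns a new instance instead of modifying this one.
    public Point WithX(double newX)
    {
        return new Point(newX, y);
    }
}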
>>It is a poor programmer indeed that discards the lessons of FP and instead writes purely imperative OO. Your interns may have been writing bad code, but it wasn't the "functional" part that was the problem.
Functional code is not idiomatic Java code; 99.99% of Java code, practices, and projects don't have it. When you go and bolt functional code on top of that, you are putting a tough puzzle on top of the code.
Don't be surprised if people cannot understand such code. In fact, it's bad programming to write non-idiomatic code in a language. It confuses people and puts patterns and paradigms in places where they don't belong. The language syntax struggles to support such semantics, and the code often looks like a difficult puzzle in itself. Such code is very difficult to maintain and saps your energy by focusing it on the wrong issues.
It's bad code to write idiomatic mutable imperative Java. I'm not willing to write worse code just because bad engineers can't get their heads around basic concepts that are integral to reliability, composability, maintainability, and correctness.
Maintaining 'idiomatic' Java written by an 'average' developer is far more difficult and disheartening than maintaining well-written Java based on lessons from FP.
In that case you shouldn't use Java at all, because if your use case requires functional programming then you should use a functional programming language. Force-fitting a non-native paradigm onto a tool and then calling those who don't understand it bad is not fair.
I'm good != Others are bad.
>>Maintaining 'idiomatic' Java written by an 'average' developer is far more difficult and disheartening than maintaining well-written Java based on lessons from FP.
More than 90% of the Java projects I know of are written in Java because it's easier, cheaper, and quicker to hire the 'average developer', and not because of the merits of Java itself. These days 'knows how to use Eclipse' == 'knows Java': even if all the programmer knows is basic syntax, it's assumed he can code in Java using IntelliSense and code completion.
Most of the Java code written today is tool-generated, anyway.
The benefits from immutability are well-known outside the functional world. Josh Bloch harped on that (and rightly so) quite a lot in "Effective Java".
Immutability in Java is great, and doesn't have a functional corequisite.
Perhaps they didn't approve of your use of reflection? Your post sounds a bit like you use it because you can create awesome-looking code. It should be used rather seldom.
Are you kidding? Reflection is one of the most powerful parts of the .NET framework. Hell, the page you link to contains many examples of the different possibilities, but they're just a start. The only knock I know of against reflection is that it's a performance hit. If that's really the issue, write the code using reflection (it's going to be better code, most likely), and then see if performance is an issue. If it is, refactor out the reflection.
Reflection makes things possible that simply aren't possible without it. It can get rid of layers upon layers of complexity. It can shrink thousands of lines of code to fractions of that.
There is no reason to say that 'it should be used seldom.' It's a powerful tool and should be wielded along with every other tool in the toolbox.
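As one small illustration of that (a hypothetical sketch, not code from the original project): a single reflection-based helper can replace a hand-written copy method for every type in a codebase.
using System;
using System.Reflection;

public static class ShallowCopier
{
    // Copies every readable and writable public property from source to target.
    // One generic helper instead of one hand-written Copy method per type.
    public static void CopyProperties<T>(T source, T target)
    {
        foreach (PropertyInfo prop in typeof(T).GetProperties(
            BindingFlags.Public | BindingFlags.Instance))
        {
            if (prop.CanRead && prop.CanWrite)
            {
                prop.SetValue(target, prop.GetValue(source, null), null);
            }
        }
    }
}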
Yeah, I don't know about that. When I first discovered reflection, I was using it for everything. Initially it was awesome, but I quickly realised that it's exactly the kind of thing that makes code a lot more complicated than it should be, especially with the deadly reflection + expression trees (runtime code generation) combo. It's really a dark path that you should stay away from. Basically, unless your problem involves having to get metadata about .NET assemblies, reflection is not the right solution. Code that's clever for the sake of cleverness is cool and entertaining but has no place in production.
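For reference, the combo being described looks roughly like this (a hypothetical sketch): a getter compiled at runtime from an expression tree. It's fast once compiled, but the property name is only checked at runtime, which is exactly the kind of trap that bites maintainers later.
using System;
using System.Linq.Expressions;

public static class Getters
{
    // Builds and compiles a getter for a property chosen by name at runtime.
    public static Func<T, object> For<T>(string propertyName)
    {
        ParameterExpression obj = Expression.Parameter(typeof(T), "obj");
        MemberExpression prop = Expression.Property(obj, propertyName);
        UnaryExpression box = Expression.Convert(prop, typeof(object));
        return Expression.Lambda<Func<T, object>>(box, obj).Compile();
    }
}

// Usage (hypothetical): var getName = Getters.For<Thing>("Name"); object n = getName(someThing);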
>When I first discovered reflection, I was using it for everything.
Well there's your problem. You have to use the right tool for the job. Reflection shouldn't be used for everything, and I wasn't suggesting that was the case. My point was that it shouldn't be avoided. It's damn helpful in a lot of different places, and to say outright that you're just not going to use it would be depriving yourself of a powerful toolset.
Sometimes I find it a little bit interesting that the .NET community typically lags behind the Java community in general (be careful interpreting 'general' vs 'all').
In Java, the best-known book that can be considered comparable to Jon Skeet's book is written by Josh Bloch, of Sun, Google, and API-design fame. He literally suggested avoiding reflection in favour of other features unless you are in the business of writing IDEs, compilers, code analysis tools, etc.
I have used reflection in the past and vow not to make that stupid move again; I will spend more time figuring out another solution. It's not that I cannot grasp reflection, but the code tends to be less elegant, less readable, and not compile-safe.
I would use dynamic languages if I found myself using Reflection a lot.
It seems to me that many well-known "best" practices need to be considered with more context.
In an environment with a large, frequently changing team, heavy use of features that provide extra abstraction may result in less-skilled team members being confused. Most enterprise environments consider that normal. Most startups would fire a programmer who couldn't grasp reflection.
I think you are mistaken. For the 99%+ of business-level code that wires up pre-built J2EE components and builds domain models, you simply don't need them.
Even in C# for example, I've seen huge systems written in .Net 4.0 without a single lambda being required.
In that sense, you don't need them for 100% of the code.
Lambdas are NEVER required, as tons of other syntactic features are never required.
They are nice to have though, because they make the code more concise and reduce bugs by reducing boilerplate and repetition. Working without them when you have them at your disposal is idiotic.
That huge .Net 4.0 systems have been written without a single lambda probably speaks more about the programmers (not experienced enough?) than about lambdas.
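To show the boilerplate point (a made-up example, not from the thread), compare filtering a list with a separate named method versus an inline lambda:
using System;
using System.Collections.Generic;

class Invoice
{
    public DateTime DueDate;
    public bool Paid;
}

class Demo
{
    // Pre-lambda style: a named method exists only to hold the condition.
    static bool IsOverdue(Invoice i)
    {
        return i.DueDate < DateTime.Today && !i.Paid;
    }

    static void Main()
    {
        var invoices = new List<Invoice>
        {
            new Invoice { DueDate = DateTime.Today.AddDays(-5), Paid = false },
            new Invoice { DueDate = DateTime.Today.AddDays(10), Paid = false }
        };

        List<Invoice> overdue1 = invoices.FindAll(IsOverdue);

        // Lambda style: the same condition inline, right where it is used.
        List<Invoice> overdue2 = invoices.FindAll(i => i.DueDate < DateTime.Today && !i.Paid);

        Console.WriteLine("{0} / {1} overdue", overdue1.Count, overdue2.Count);
    }
}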
I don't think that they reduce bugs at all; that's a bit of a weird assertion to make.
It's not about the programmers but the architecture. To be honest, the primary use case for lambda expressions in .NET is LINQ and possibly container configuration. If you use NHibernate and do not expose IEnumerable anywhere (which is required across network boundaries and encapsulation boundaries), then you just don't see it anywhere.
> To be honest, the primary use case for lambda expressions in .net is LINQ
Uhm, yes, if you're incapable of using delegates and writing functions that take delegates as parameters, then you'll only use lambda expressions where other people have written such functions, for example in LINQ or the generic collections in the framework.
But if you can use delegates, and if you can use lambda expressions, they're a wonderful tool to make your code more readable, and that in turn reduces bugs.
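A small made-up example of writing such a function yourself (not from the thread): a retry helper that takes its work as a delegate, so the policy lives in one place and the call site stays readable.
using System;
using System.Threading;

static class Retry
{
    // The caller supplies the operation as a delegate; this method supplies
    // the retry policy wrapped around it.
    public static T Run<T>(Func<T> operation, int attempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception)
            {
                if (attempt >= attempts)
                    throw;
                Thread.Sleep(100 * attempt); // simple backoff between attempts
            }
        }
    }
}

// Usage (hypothetical): string body = Retry.Run(() => client.DownloadString(url), 3);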
If you're considering Action<T> and the numerous Func<T> implementations, then I disagree mostly.
They decrease efferent coupling but do not decrease bugs.
There is no direct correlation between readability and bugs from my experience. Some of the least buggy code I've seen is messy and likewise the most buggy can be the most readable. There is not enough correlation to draw that conclusion.
The metric that is important is the skill of the programmer who fulfilled the specification: their ability to understand the task fully, to translate that understanding to the language at hand, and to know where and when things will go snap.
Things that DO increase bugs:
1. Crap programmers.
2. Coupling - an increase in the side effects of any change.
3. Bad test coverage.
4. Poor design up front.
5. The killer: bad specifications (not even a code issue!).
Using more advanced language features does not necessarily improve things.
0. APIs that don't leverage the compiler to enforce correctness and catch bugs at compile time.
Designing APIs in such a way often requires FP language features. You can often model equivalent APIs without FP features, but they'll be so verbose as to be unusable.
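A tiny made-up contrast of what "leveraging the compiler" means here: a stringly-typed lookup only fails at runtime, while a Func-based selector makes the same mistake a compile error.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

class Order
{
    public decimal Amount;   // public field so the reflection version below can find it
}

class Demo
{
    // Stringly-typed: the field name is just data, so a typo only fails at runtime.
    static decimal SumByField(IEnumerable<Order> orders, string fieldName)
    {
        FieldInfo field = typeof(Order).GetField(fieldName);
        return orders.Sum(o => (decimal)field.GetValue(o));
    }

    static void Main()
    {
        var orders = new List<Order> { new Order { Amount = 10m }, new Order { Amount = 5m } };

        decimal a = SumByField(orders, "Amount");   // "Ammount" would compile, then crash
        decimal b = orders.Sum(o => o.Amount);      // a typo here won't even compile

        Console.WriteLine("{0} {1}", a, b);
    }
}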
I think the implication here is that if they require fewer lines of code, they are going to reduce the number of bugs.
Regarding LINQ: is there any .NET app out there that isn't using IEnumerable at some point? I can't recall ever writing an app that didn't use it, unless it was an extremely simple app. Besides that, lambdas are very useful in event subscription.
4 hand grenades waiting to go off in that expression. Try and spot them.
We have a 670kloc platform that doesn't have a single one in it (it's still .Net 2.0). 450 domain objects, 1000+ NH criteria queries and about 650 aspx pages...
I don't see any pitfalls that wouldn't arise with imperative techniques. Well, Single() throws if there is no item, or more than one item, with Property.Name "Blah!" in the collection, but if you just want the first one or none you should use FirstOrDefault() anyway. The point is that you need to write roughly a dozen lines, with more room for hand grenades, to reproduce that functionality with imperative constructs:
Thing blah = null;
bool found = false;
foreach (var item in collection) {
    if (item.Property.Name == "Blah!") {
        if (!found) {
            found = true;
            blah = item;
        } else {
            throw new InvalidOperationException("collection contains more than one \"Blah!\" item!");
        }
    }
}
if (!found)
    throw new InvalidOperationException("collection does not contain a \"Blah!\" item");
return blah;
Come on, that's ridiculously verbose and doesn't solve any problems of the lambda version. If you are concerned about nulls, then you have to insert != null expressions anyway. And you can easily refactor by introducing an additional predicate (a null check, say) right above the old one, while inserting the null checks into the imperative code takes considerably more effort (locating the predicate, deciding whether or not to turn it into a 120-char-wide monster or to add another if block, etc.).
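Concretely (a hypothetical sketch; the exact expression from the parent comment isn't quoted here), the null guard is one extra clause in front of the old predicate:
// Hypothetical original:
var blah = collection.Single(item => item.Property.Name == "Blah!");

// Null-safe version: one Where clause added above the old predicate.
var blahSafe = collection
    .Where(item => item != null && item.Property != null && item.Property.Name != null)
    .Single(item => item.Property.Name == "Blah!");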
You miss one important point, which I've learned from many years of using .Net:
Which dereference in your LINQ expression is causing the NullReferenceException?
Try solving that problem when it goes pop in production and you have a couple of million quid in flight!
I'd write it as follows (probably wrapped as a generic function of T):
public Thing GetItemWithName(ICollection<Thing> collection)
{
    Thing output = null;
    int found = 0;

    // preconditions
    Check.IsNotNull(collection, "collection was null");

    foreach (var item in collection)
    {
        // ref checks
        Check.IsNotNull(item, "item was null");
        Check.IsNotNull(item.Property, "item.Property was null");
        Check.IsNotNullOrWhiteSpace(item.Property.Name, "item.Property.Name was null or whitespace");

        if (string.Compare(item.Property.Name, "Blah!", StringComparison.InvariantCultureIgnoreCase) != 0)
            continue;

        found++;

        // rule check
        Check.IsTrue(found <= 1, "Found more than expected single element in collection");

        output = item;
    }

    // postconditions
    Check.IsNotNull(output, "output null. Expected reference");

    return output;
}
Note: I tend to write proper industrial-grade stuff that has to work every time, without fail, under any edge cases or conditions. These conditions are prescribed up front. Zero bugs and failing early are the only acceptable outcomes, which is why this is verbose.
Well, if the error reporting of Enumerable.Single() isn't good enough for you (I think it throws InvalidOperationExceptions with different messages for both error cases), nothing stops you from implementing it once for yourself and profiting from the benefits every time, instead of writing the same looping constructs interlaced with error reporting again and again.
public static T MySingle<T>(
    this IEnumerable<T> collection,
    Func<T, bool> pred,
    string foundNoneMsg,
    string notUniqueMsg)
{
    Check.IsNotNull(collection, "collection was null");
    var filtered = collection.Where(item => pred(item));
    Check.IsTrue(filtered.Any(), foundNoneMsg);
    Check.IsFalse(filtered.Skip(1).Any(), notUniqueMsg);
    return filtered.Single();
}
public static bool HasName(this Thing item, string name)
{
    Check.IsNotNull(item, "item was null");
    Check.IsNotNull(item.Property, "item.Property was null");
    Check.IsNotNullOrWhiteSpace(item.Property.Name, "item.Property.Name was null");
    Check.IsNotNullOrWhiteSpace(name, "name was null or whitespace");
    return string.Equals(item.Property.Name, name, StringComparison.InvariantCultureIgnoreCase);
}
collection.MySingle(
    item => item.HasName("Blah!"),
    foundNoneMsg: "collection has no item with Property.Name 'Blah!'",
    notUniqueMsg: "collection contained more than one 'Blah!'");