Hacker News: princeofwands's comments

Predictive modeling will be nearly automated (except in cases where manual feature engineering helps).

Data science will focus more attention on solution finding, data gathering, cleaning, ETL, and business.


There is a psychological effect when spending money with credit cards. Spending physical cash hurts the most, debit cards less, and credit cards least. It is very easy to overspend with a credit card and leave the tab for future you to worry about.

Depending on how susceptible you are to such psychological tricks (and cashback is also a trick to make you spend more or hook you to the brand -- it is only offered because it is profitable in expectation for the bank), your advice may be neutralized or net negative for a large part of the population that lacks your strong self-discipline. One late fee can wipe out all your profits (and damage your credit score).

It is smart and risk-free to spend only what you've got ("pay as you go"); besides, you help avoid a national credit crunch.

> Debt robs a man of his self-respect, and makes him almost despise himself. Grunting and groaning and working for what he has eaten up or worn out, and now when he is called upon to pay up, he has nothing to show for his money; this is properly termed “working for a dead horse.”

> It is all very well to say, “I have got trusted for sixty days, and if I don't have the money the creditor will think nothing about it.” There is no class of people in the world, who have such good memories as creditors. When the sixty days run out, you will have to pay. If you do not pay, you will break your promise, and probably resort to a falsehood. You may make some excuse or get in debt elsewhere to pay it, but that only involves you the deeper.


You have a point, but pay them off every month and you have a good credit rating for a (strictly defined) emergency.


It was a Cold War battle between the Soviet Union and the US. The RAND Corporation found in 1973 that the US was falling behind in paranormal research.

> (1) Soviet research is much more oriented toward biological and physical investigation of paranormal phenomena than is U.S. research, which is dominated by psychologists;

> (2) although visible U.S. and Soviet level of effort appear roughly equal, over forty years of research in the United States have failed to significantly advance our understanding of paranormal phenomena;

> (3) if paranormal phenomena exist, the thrust of Soviet research papers appears more likely to lead to explanation, control and application than is U.S. research;

The paranormal arms race between East and West may have started with a 1960 French article describing how experiments at Duke University had established telepathic communication with nuclear submarines using Zener cards, with a stated success rate of 75%. The Navy later stated the story was a hoax, but it was likely planted deliberately by Western intelligence agencies to distract from real technological advances in communicating with submarines (such as Very Low Frequency radio). The Russians seemed to take the bait, wasting resources, yet later started reporting successes and publishing a wide range of high-quality research (of which the CIA became aware). [1]

This in turn scared the USA into keeping up. Meanwhile the Russians promoted their own hoaxes and disinformation, such as Nina Kulagina, who was seemingly capable of telekinesis.

By the way, hypnosis and mass hypnosis are not woo-woo, but legitimate toolsets of the intelligence agencies. Even the Stargate project had its use as a creative tool for scenario development and intuitive thinking. It is interesting to note that its participants Harold Puthoff, Edwin May, Ingo Swann, and Pat Price were all involved with Scientology. Even the government may at one time have been interested in the supposed powers of the OTO.

> As scientists we should not be pre-disposed to shutdown things we do not understand without putting said phenomena through a testing phase.

Which is why Dr. Estabrooks (who the Russians knew received a large budget from the US military to conduct research into the paranormal) wanted to know for sure whether it was possible to hypnotize someone into committing murder and forgetting it ever happened. Among real scientists such a test would quickly be shut down on moral grounds, so perhaps it is better to speak of military research when discussing this woo-woo.

Given how common a trick it is to spread disinformation and waste the academic resources of foreign enemies ("eating carrots makes British pilots see at night", or the suggestion to drop the bandit problem over occupied France), I wonder what tricks are in use today. I suspect that any country which focuses a lot of attention on fairness in AI will be at a disadvantage against a country that does not seem to care and just keeps automating regardless of privacy or discrimination costs.

[1] https://www.wired.com/images_blogs/dangerroom/files/SovParap...


I'm with you on disinformation around AI, which is surely happening, but I disagree on the target. Intelligible AI does more than merely prevent accidental discrimination: it lets humans iterate more quickly on their approach and then cross-train models that lack the intelligibility requirement, closing the gap with what is possible without this restraint while simultaneously speeding up human understanding of AI approaches and algorithms.

No, with AI the thing that sounds like total bullshit to me is the way some government officials talk about the risks of AI in frenetic terms, while speaking as if the risk will only become apparent once AI is capable of contemplating abstract concepts and cognition. The reason I think this is bullshit is that I consider those preconditions 20 years away at the very earliest[0], while the real threat is already present: humans harnessing AI is already powerful enough, especially for state actors.

[0] Likely 100 years away or creating so much thermal waste that their true hazard is mitigated.


1.) A super-intelligent mind could predict what it would do. Then we can make it tell us.

2.) Here you conflate "intelligence" with biological power structures (energy resources, territorial plotting). That is like asking where aircraft go to the toilet.


Yes. Foursquare sells your location data to other companies. These companies then know: where you live, where you work, how you commute, if you use drugs, if you drink a lot, if you are gay, if you are religious, your favorite food, an estimate of your income, if you work out, ... Depending on the company and jurisdiction, they can then use this data to deny you service or place you inside an advertising bubble.

Edit: I do not understand the downvotes on this. You could help me form a better view on this, by replying or stating what is wrong. I am talking Enterprise Access to Foursquare data.


> If you use drugs

> If you are gay/religious

Uh, neither of these can be inferred definitively from location data.


You do not need 100% accuracy.

Easy data mining: "Religious center" is a top-level category. "Religious school" is a type of "School". "Marijuana Dispensary" is a type of "Shop & Service". "Gay Bar" is a type of "Bar".

More evolved: Find location patterns that correlate with known gay or religious people (for instance by cross-referencing data sources).

https://developer.foursquare.com/docs/resources/categories
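A minimal sketch of the "easy data mining" case above. The category names and visit log are illustrative (hypothetical, not Foursquare's actual API or taxonomy), but it shows how coarse category lookups alone can flag sensitive attributes without any 100%-accurate inference:

```python
# Hypothetical mapping from venue categories to sensitive inferences.
# Category names are illustrative; Foursquare's real taxonomy is linked above.
SENSITIVE = {
    "Marijuana Dispensary": "drug use",
    "Gay Bar": "sexual orientation",
    "Religious Center": "religion",
    "Religious School": "religion",
}

def flag_visits(visit_log):
    """Count visits per sensitive inference from a list of (venue, category)."""
    flags = {}
    for venue, category in visit_log:
        if category in SENSITIVE:
            label = SENSITIVE[category]
            flags[label] = flags.get(label, 0) + 1
    return flags

log = [
    ("Corner Cafe", "Coffee Shop"),
    ("The Vault", "Gay Bar"),
    ("St. Mary's", "Religious Center"),
    ("The Vault", "Gay Bar"),
]
print(flag_visits(log))  # {'sexual orientation': 2, 'religion': 1}
```

Repeated visits raise confidence; cross-referencing with other data sources (the "more evolved" approach) only sharpens the picture.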


Also scary considering that if Airbnb does not like you, for whatever valid or invalid reason, you are not welcome in either local hosting or hotels worldwide. Given that Airbnb already declined to host people it believed were attending a lawful political rally (the "hateful Nazis of Charlottesville"), their increased reach concerns me.


You could still stay at any hotel worldwide, just not book it through HotelTonight. Although I doubt any bans on Airbnb would extend to HotelTonight, because liability for hotel damage does not extend to booking agents.

Whereas with Airbnb, they also cover any damage that guests may do, as well as issues that may arise from the hosts.


It feels like we're heading towards (or already living in) China's social credit score system but administered by corporations.


Do you know they banned honking here in Shanghai and replaced all the mopeds with electric ones overnight? The city is less polluted and I can sleep in this cheap Airbnb. Pretty good dictatorship in this regard. While we argue, they are winning.




Corporations shouldn't be the judge and jury. Yesterday they banned Nazis, tomorrow anti-vaxxers, and the next thing you know you can't find a bed because you used the wrong gender pronoun. Slippery slope, very concerning.

Check Joe Rogan's recent show with Jack Dorsey and Vijaya Gadde for an endless list of such examples.

Having to police "social" platforms is a bigger challenge than solving any technical problem and the insistence that it be done is slowly tearing those platforms apart.


The wise man bowed his head solemnly and spoke: "theres actually zero difference between good & bad things. you imbecile. you fucking moron"


No. I did and they clearly are policing people making threats and deliberately winding people up. If you don’t like the rules don’t use the platform.


Either people agree with you or they're Nazis? Maybe you should rethink why you are getting downvotes. It may be more that people think companies acting as gatekeepers or moral compasses can very much be a slippery slope, and your simply saying it's not does not change that.


Downvotes simply silence a post. If someone doesn't agree, they can rebut, but that is not what a downvote is.


In this case, it's a slap on the wrist for a comment that doesn't positively contribute.


I fail to see how calling out that Nazis are unequivocally evil is not positively contributing when someone is trying to gain sympathy for being discriminated against for being a Nazi. I bet you would also say that it’s “not the time” to have these sorts of discussions. Too bad, I’m not going to be quiet about this. It’s deeply sad to me that this community is willing to quickly rally around Nazis and shame people who speak against them.


It is because you're ignoring what the comment is actually about. Airbnb declines hosting of people they deem unworthy, because they have the wrong political ideology or spoke out about the wrong thing.

Who is to say they don't expand this? Can I still go to demonstrations and use Airbnb, or will that be banned as well? Can I still criticize the president and use Airbnb? What about being anti-war or pro-war?


I’m sorry, this is a dumb argument. I will not interface with people who think Nazis are a slippery slope. That’s bullshit and you know it. This is a bad faith argument


Denying people their right to gather peacefully at a lawful political rally, simply because you posit them as irredeemably evil, is a hallmark of fascism. And even though I do not agree with your blatant Nazi mindset, I will fight for your right to state your backwards views, and attend lawful political rallies of your choosing, without a multinational company stifling your activism by denying you lodging after they snooped on your online behavior.

The same sympathy that is lacking for anti-immigrant activists is on display in your posts. Can you think of a single legitimate purpose of anti-immigrant activists? If not, how are you doing anything but creating division and misunderstanding?


The one thing we need to be intolerant about is intolerance.


It is not the ML model that is updated with information, but the predictive modeler herself. She now finds parameters that make the model perform well on that specific test set. This gives you overly optimistic estimates of generalization performance (thus unsound science; and in business it is better to report performance too low than too high, because a policy built on a model overfit like this can ruin a company or a life). For smarter approaches to this problem, see the research on reusable holdout sets.
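A toy simulation of how the modeler, not the model, leaks test-set information (hypothetical setup, numpy only): with pure-noise labels the true accuracy is 50%, yet selecting the best of many random models on a fixed test set reports well above that, while a fresh holdout reveals the truth:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure-noise task: labels carry no signal, so true accuracy is 50%.
n_test, n_fresh, n_models = 200, 200, 500
X_test = rng.normal(size=(n_test, 10))
y_test = rng.integers(0, 2, n_test)
X_fresh = rng.normal(size=(n_fresh, 10))
y_fresh = rng.integers(0, 2, n_fresh)

def accuracy(w, X, y):
    return np.mean((X @ w > 0).astype(int) == y)

# The "tuning" step: keep whichever random weight vector scores
# best on the one fixed test set the modeler keeps reusing.
weights = [rng.normal(size=10) for _ in range(n_models)]
scores = [accuracy(w, X_test, y_test) for w in weights]
best = weights[int(np.argmax(scores))]

print("selected model, reused test set:", max(scores))          # optimistic
print("selected model, fresh holdout:  ", accuracy(best, X_fresh, y_fresh))
```

The gap between the two printed numbers is exactly the optimism the comment describes; reusable-holdout techniques bound how much of it can accumulate.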


To use the test set explicitly for evaluation is a deadly sin. When found out, you'd face serious damage to your reputation (like Baidu did a few years ago). [1] What the decreasing performance on the remade CIFAR-10 test set shows is probably something more akin to a subtle form of overfitting (because these datasets have been around for a long time, the good results get published and the bad results discarded, creating a feedback loop). [2] It is also possible the original test set was closer in distribution to the train set than the remade one. The model rankings stay too consistent for this to be test-set evaluation cheating.

I also think the "do not trust saliency maps" is too strongly worded. The authors of that paper used adversarial techniques to change the saliency maps. Not just random noise or slight variation, but carefully crafted noise to attack saliency feature importance maps.

> For example, while it would be nice to have a CNN identify a spot on an MRI image as a malignant cancer-causing tumor, these results should not be trusted if they are based on fragile interpretation methods.

Interpretation methods are as fragile as the deep learning model itself, which is susceptible to adversarial images too. If you allow for scenarios with adversarial images, not only should you not trust the interpretation methods, but also the predictions themselves, destroying any pragmatic value left. It is hard to imagine a realistic threat scenario where MRIs are altered by an adversary _before_ they are fed into a CNN. When such a scenario is realistic, all bets are off. It is much like blaming Google Chrome for exposing passwords during an evil maid attack (when someone has access to your computer, they can do all sorts of nasty stuff, and it is nearly impossible to guard against this). [3]
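The shared fragility can be illustrated with a toy linear model (my own sketch, not the paper's setup): for a linear score the input gradient is just the weight vector, so an FGSM-style step of eps * sign(w) both flips the prediction and, since that gradient doubles as the saliency, undermines the explanation:

```python
import numpy as np

# Toy linear "model": score = w @ x, predict class 1 if score > 0.
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 1.0, 1.0])   # score = 2 - 1 + 0.5 = 1.5 -> class 1

# For a linear model the gradient of the score w.r.t. the input is w,
# so the worst-case bounded perturbation is the FGSM step eps * sign(w).
eps = 0.6
x_adv = x - eps * np.sign(w)    # push the score downward

print(w @ x)      # 1.5
print(w @ x_adv)  # 1.5 - eps * (|2| + |-1| + |0.5|) = -0.6 -> class 0
```

Deep networks need carefully crafted rather than closed-form perturbations, but the mechanism (small input change, large change in output and attribution) is the same.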

[1] https://www.technologyreview.com/s/538111/why-and-how-baidu-...

[2] http://hunch.net/?p=22

[3] https://www.theguardian.com/technology/2013/aug/07/google-ch...

EDIT (meta): I liked the article and do not want to argue it is wrong. It is difficult for me to start a thread without finding one or two things to nitpick or expand upon, but this article was already very thorough.


As one of the authors of the "Interpretation of Neural Networks is Fragile" paper, I would agree with you.

To a certain extent, saliency maps can be perturbed even with random noise, but the more dramatic attacks (and certainly the targeted attacks, in which we move the saliency map from one region of the image to another specified region) require carefully crafted adversarial perturbations.


> To use the test set explicitly for evaluation is a deadly sin

I’ve seen tons of papers doing that and getting published, especially on CIFAR-10. Not saying it’s good practice, just that it’s fairly common.


>"It is hard to imagine a realistic threat scenario where MRIs are altered by an adversary _before_ they are fed into a CNN."

What about when people in the hospital who suspect a patient has cancer use the best machine to create that patient's scans, and tend to push patients they think are fine to the older, less capable instrument? Or if they choose to reserve time on the best instrument for children?

What about when the MRIs done at night are done by one technician who uses a slightly different process from the technicians who created the MRI data set?

At the very least there is a significant risk of systematic error being introduced by this kind of bias, and as you say, it is really hard to guard against. But if a classifier that I produce is used where this happens and people die... well, whatever else, I would feel responsible.


The teams specialize.


The other commenter was describing it as a “safe place” where you can talk about complex CS topics without confusing people. Unless your teams never talk to other teams, that doesn’t fit.


Current application: Better character- and token-level completion makes it easier for people with physical disabilities to interact with computers. http://www.inference.org.uk/djw30/papers/uist2000.pdf (pdf)

Research progress: Better compression measures the progress to general intelligence. http://mattmahoney.net/dc/rationale.html

Future application: Meaningful completion of questions, leading to personalized learning material for students all over the world. If only there was an OpenQuora.
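The compression-as-progress argument rests on the identity between prediction and coding: a model that assigns the next character higher probability codes it in fewer bits. A minimal stdlib-only sketch on a toy string (my own illustration, not Mahoney's benchmark):

```python
import math
from collections import Counter

text = "abababababababab"

# Unigram model: code each character with -log2 p(c) bits.
counts = Counter(text)
unigram_bits = sum(-math.log2(counts[c] / len(text)) for c in text)

# Bigram model: predict each character from its predecessor,
# coding it with -log2 p(c | previous) bits.
ctx = Counter(text[:-1])                 # how often each context occurs
pairs = Counter(zip(text, text[1:]))
bigram_bits = sum(
    -math.log2(pairs[(a, b)] / ctx[a])
    for a, b in zip(text, text[1:])
)

print(unigram_bits)  # 16.0: p('a') = p('b') = 0.5, one bit per character
print(bigram_bits)   # 0.0: the predecessor determines the next character
```

The better predictor compresses further; on natural text the same principle drives the compression benchmarks Mahoney uses as a proxy for intelligence.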

