Chinese AI stirs panic at European geoscience society (science.org)
97 points by rguiscard on July 3, 2024 | hide | past | favorite | 92 comments


The geopolitics of LLMs is definitely interesting. China censors politics. The West censors intellectual property. I am not equating the two, so please excuse the edgy teen dystopia that follows:

I keep having this vision of a dystopian future where the Star Spangled Banner is sold off to a private equity firm and anyone who wants to play it has to pay a licensing fee. Of course “freedom-hating” countries have banned it long ago as well. The subversive main character is playing a bootleg copy of it in some politically ambiguous country when police break his door down, but the screen cuts to black before we find out whether it was the copyright mafia or the dictatorship.


>China censors politics. The West censors intellectual property.

How does the West "censor Intellectual Property"?


The West censors politics too, to be fair. Anyone can be "cancelled" for a political opinion, get fired, or face other ramifications. Many people were "ghost banned" or banned during the COVID-19 craziness, and some were even fired. Julian Assange is another well-known example of a political prisoner. Many people on Hacker News or LinkedIn refrain from engaging in politics because they fear getting fired, judged, isolated, or even prosecuted. Thus, each country has its own "nerve" that no one can talk about.


Engaging in politics on LinkedIn is kind of psychotic, sorry.


I agree more with this edit

>Engaging on LinkedIn is kind of psychotic


Which part of that is government censorship? Society deciding they don’t want to hear what you have to say isn’t censorship.


It’s not government censorship. It’s a chilling effect on speech due to fear of retaliation from companies.

It isn’t illegal to call for unionization, but it can be career suicide. There are many things like this. It isn’t so much “society” not wanting to hear what you have to say, but those in power wanting to actively suppress it.

So you don’t need to threaten jail time to achieve the same result as government censorship.


First, it compels governments to censor speech. Look at the EU, where "hate speech" laws and laws against "disinformation" were introduced, and at the ongoing battle over chat control. Second, you can't deny the impact of being banned from monopolistic sites like Facebook, YouTube, or Google. Banning everyone you disagree with from those platforms leads to massive echo chambers.

And is it really "society" that "decides" and not just a vocal minority with the right connections to the press?


> it compels governments to censor speech. Look at the EU, where "hate speech" laws and laws against "disinformation" were introduced, and at the ongoing battle over chat control

This is a weird point on which to fault the EU relative to China.


It's not faulting it, just showing that it is no better in that regard.


Attempting to stop hate speech is no better than jailing citizens for speaking negatively about the current ruling government? I'm going to have to go with a hard disagree on that one.

And no, "but it could be because XYZ" doesn't really fly. Any law "could be" unjust. Unless you have specific examples of hate speech laws in Western Europe that are labeled as protecting against hate speech but are actually being used for nothing more than jailing political rivals, they're not the same.


> First, it compels governments to censor speech. Look at the EU, where "hate speech" laws and laws against "disinformation" were introduced, and at the ongoing battle over chat control.

Countries both inside and outside the EU have been passing hate speech laws since before the EU was even a thing. You call that censorship, but we know that allowing hate to be spewed unconditionally is itself a form of silencing others. Just look at the US: how’s the unfettered hate and disinformation working out over there?

https://en.wikipedia.org/wiki/Paradox_of_tolerance

And chat control has nothing to do with hate speech.

> Second, you can't deny the impact being banned from using any monopolistic sites like Facebook, YouTube, Google. Banning everyone from there when you don't agree with them leads to massive echo chambers.

Who is banning those websites? You mean China, which has nothing to do with the EU? Or like banning Tik Tok, which the US is at the forefront of?


>...but we know that allowing hate to be spewed unconditionally is itself a form of silencing others.

This is not clear to me. Furthermore, this statement assumes that there is a single and - more importantly - static definition of what is "hate speech", which is plainly not the case.


> This is not clear to me.

It is not clear to you that when a group has such contempt for another they threaten and attack them verbally and physically, that the latter will be afraid to speak freely?

Find a couple of gay or black adults in the US or Western Europe and ask them about events they’ve personally been through.

> Furthermore, this statement assumes that there is a single and - more importantly - static definition of what is "hate speech"

No, it does not. That was not part of the claim at all.


You are doing the exact thing that makes this discussion impossible. You are conflating speech (sounds that come out of my mouth) with physical violence (broadly illegal). The malleability of the definition of "hate speech" is the implied premise in every one of these debates, because without it your arguments make no sense. You even included a helpful hint in your original post.

>You can just look at the US, how’s the unfettered hate and disinformation working out over there?

You are drawing a comparison here that goes something like "the US doesn't have draconian hate speech laws like some countries in Europe, and as a result there are political outcomes I don't like".

It's working out fine, by the way.


> You call that censorship but we know that allowing hate to be spewed unconditionally is itself a form of silencing others.

How is that? And who are we?


> How is that?

Does it need explaining that when someone regularly harasses and threatens someone else (or a group), the target won’t feel safe and will refrain from speaking freely out of fear? Has everyone already forgotten the harassment, bomb threats, and doxxing from GamerGate, to name a single example?

https://en.wikipedia.org/wiki/Gamergate_(harassment_campaign...

It takes but a sliver of empathy to understand that this can negatively affect someone. I expected the previous Wikipedia link to have been enough to make the concept intellectually clear, but evidently I was wrong.

> And who are we?

The world. The United Nations. Anyone who has been at the receiving end of hate speech. Their friends. People who don’t live in a closed safe bubble, unaware of the problems of others.

https://news.un.org/en/story/2023/01/1132597

This is not hard to search for.


Censorship is different from social mores. Every group of people has "typical behaviors considered permissible", codified or not. There's no avoiding that.

Censorship is not that.


But again, there's plenty of actual censorship in the west. Maybe not the United States, but the UK, Canada, Australia, Germany... all definitely have it, with documented cases.


Censorship where people tell you to f*ck off because they don't want to hang out with you, and censorship where the government, with its monopoly on violence, takes you and/or your family away for torture or hard labour, are very different things.


The west absolutely has the latter. You can be arrested for offensive speech, offensive dolls, historical revisionism, offensive fliers, etc etc. Just like Russia, just like China (with admittedly lighter sentences).

If you have trouble wrapping your head around this, it's because political speech isn't just speech you find personally OK. It's all political speech.


Sensing a motte and bailey here where supposedly China takes you and/or your family away for torture or hard labour for posting online, and when you look into it, it's more like your post gets deleted and maybe your account gets banned.


Now do it in Russia. Or be Uighur in China. Or maybe talk about Tiananmen?


I thought a fun site might be "UK or Russia", where I mention people getting arrested for things, and people try and figure out if it's the UK or Russia.

People would absolutely struggle, in both countries people are arrested for similar actions - though Russian sentences seem to be harsher.

E.g., a quick search finds me a story where someone was arrested for destroying a Bible and someone else was arrested for destroying a Quran. One in Russia, one in the UK.

"A man from [Redacted] has been released on bail after allegedly posting on social media a video that ends with him burning a copy of the Qu'ran."

"Three teenagers from the [Redacted] have been detained after they filmed themselves burning the Bible on a charcoal grill."


Is burning a book "speech" now? Newspeak gets more and more confusing.


It would come under the broader concept of freedom of expression.

I didn't know you had a specialized interest in vocal freedom of expression only!


Why, so you can make more shit up? Lol


Sorry, my bad. I thought you were arguing in good faith.


It's very clear you are not.


only to you, it seems


Twitter files revealed direct censorship recently


I mean, the West is a big place, but if you take somewhere like the UK there's a wealth of examples of people getting arrested for political opinions. I.e., it's not just "informal" but "formal" as well.


Can you provide some examples, please?

All I could find is this, and it is in no way a "political opinion": https://www.cbsnews.com/news/uk-man-jailed-over-facebook-sta...


Count Dankula

https://news.sky.com/story/essex-pub-that-had-golliwog-dolls...

UK pub raided by police to seize offensive dolls

https://metro.co.uk/2022/11/02/met-police-officers-jailed-fo...

UK police arrested for offensive jokes in a WhatsApp group chat

https://www.itv.com/news/calendar/2024-03-01/man-jailed-over...

UK man arrested for offensive stickers


First link says

> No one has been arrested or charged

and you were claiming arrest

Second link: police officers joking about raping; again, no arrest for political opinion.

Third link: racist and antisemitic stickers. Again, nothing political. Well if your idea of politics is blind hate then the discussion from my side is over.

You might say the UK has censorship and no absolute free speech. That's true, but this is still miles away from prosecuting political opinions. I am not from the UK, but I am perfectly comfortable with prosecuting rape threats and hate.


There are people in China who find wanting democracy just as abhorrent as you find hate speech. It's a different value system. I'm sure they'd make similar arguments.

Again, political speech isn't just speech you happen to approve of.


Nice whataboutism. I don't really care about a random Chinese person's opinion on democracy. I want a civilized society around me. If you really believe rape threats are somehow political speech, you can fuck right off. I am generally against violence, but if you tried to justify this to my face you would get slapped.

I am fine with people voicing ideas ranging from communism to white supremacism as long as there are no calls for hate and violence.


> if you try to justify this to my face you would get slapped.

I don't believe you :)


In the context of LLMs, I’m talking about refusals. Ask ChatGPT to draw Mickey Mouse. What happens? Intellectual property rights kick in and you can’t see it.

Ask a Chinese chatbot what happened on June 3, 1989 in Tiananmen Square and you will also get some kind of refusal. Some kind of principle of social stability has kicked in.

I’m really just making a trite observation about the way LLMs reflect our laws/values.


> Some kind of principle of social stability has kicked in.

Same goes in finance. For example, SIFIs (systemically important financial institutions) are effectively exempt from regulation and even bankruptcy. It is human nature to manage/avoid risk, and the system will always want to maintain itself. It is kind of pointless to point fingers and say, oh, this company/nation/regime censors this. Everyone does it in their own way.


I don't know if I'd call it censorship, but unlike real property IP effectively requires restrictions on what speech every other member of society can engage in to give the IP owner a monopoly on that speech.


China officially does not care about Western declarations of intellectual property (IANAL); by extension, they do not censor it. In the US, Commonwealth countries, the EU, etc., copyrighted material in publications is, in various flavors, behind a paywall in some circumstances (Elsevier science publications, for example), so by extension the prohibition on reproduction and copying is "censorship".


Y'all, I'm as against our boneheaded copyright regime as anyone, but that ain't what the word "censorship" means.


I'll say a "state granted monopoly" on distribution is pretty close to censorship.

People are not free to distribute or use some body of text because the judicial arm of the government will prosecute them.


> China officially does not care about declarations of Intellectual Property from the West

Cool. Now try appropriating a Standing Committee member’s intellectual property.


I think equating censorship and intellectual property is not a good comparison. Copyright laws do not restrict sharing of ideas or opinions just specific textual instances of those opinions. Under copyright, you are free to paraphrase or quote the text to share the core idea. Political censorship prevents you from communicating specific political views, which limits dissent. I don’t see how copyright does that.


That’s a fair point. I think “censorship” was really a poor word choice. I should have used “refusal” to emphasize that this is from an LLM.

It’s really a sign of my poor writing that the ensuing thread is arguing about something other than my main point, which was really just a simple observation about how refusals can tell us something about laws and values of a society.


> China censors politics. The West censors intellectual property.

The West also teaches its citizens to use language in very interesting ways. Someone unfamiliar with our customs might interpret your phrase to mean that the West doesn't censor politics.

In my opinion subverting the base means of communication is by far the most clever and powerful of the three techniques.


Epilogue:

In a dark room filled with hooded figures, one of them laughs that people are focusing on the 'icon' (the Star-Spangled Banner) rather than the real, concrete thing.


I remember pleas and slogans of "data wants to be free!" and a generalized public attitude of down with all copyrights and patents. Remember Napster? I find it hard to not think the attitude now is merely the public echoing what "big journalism" is telling them to parrot. I'm so disillusioned.


Most of the issues that people have with it are a lack of attribution and using this freely scraped data to turn a profit.


I'm sure none of these people use freely available internet data as part of their jobs (i.e., for a profit), right?

More specific to this situation, is there really much profit in geology chatbots?


Well, the AI companies certainly believe data wants to be free. To them, at least. Probably not to anyone else.


I’m mostly with you; I am still of that camp, and I hope, if we have some amazing AI tooling coming that makes humanity more efficient, that we don’t get stopped by copyright of all things. That said, I am irked when the scraping doesn’t respect copyright but then effort is made to protect the IP of the resulting models.


Earlier in the press:

Geoscience AI in crisis? (17 June 2024) https://geoscientist.online/sections/viewpoint/geoscience-ai...

    Paul Cleverley raises concerns about big-data artificial intelligence projects in the geosciences

Geologists raise concerns over possible censorship and bias in Chinese chatbot (24 June 2024) https://www.theguardian.com/technology/article/2024/jun/24/g...

    Tests on Qwen, part of GeoGPT’s underlying AI, reveal geoscience-related questions can produce answers that appear to be influenced by narratives set by the Chinese Communist party.

    For example, when asked how many people have died in a mining operation in Ghana run by the Shaanxi Mining Company, Qwen says: “I’m unable to provide current or specific information about events, including mining accidents, as my knowledge is based on data up until 2021 and I don’t have real-time access to news updates.”

    The same question posed to ChatGPT, the chatbot developed by the US company OpenAI, produces the answer: “The Shaanxi Mining Company in Ghana has experienced multiple fatal incidents, resulting in a total of 61 deaths since 2013. This includes a significant explosion in January 2019 that alone claimed 16 lives. ”

Eight days ago on HN: https://news.ycombinator.com/item?id=40773876 (1 comment | 2 points)


Is it any surprise that models reflect their RLHF training, and thus cultural and governmental underpinnings...? Just look at the old Gemini cultural flubs for a western-centric example. It will be very difficult (impossible?) to remove underlying bias from an AI model, since you'd be hard pressed to get any two people "aligned", let alone entire communities, states, countries & the world.


It is not a surprise models reflect their source training data and RLHF. The surprise is the DDE project choosing to use a Chinese government censored model (based on Qwen) for a world-wide audience.


Are miner deaths relevant to a geology-focused tool? The example of its use was to explain the fossil record.


Not relevant to surface geochemistry results, no.

Not relevant to airborne geophysics results, no.

Not relevant to drill exploration sampling results, no.

Not relevant to Technical | Economic Feasibility studies, no.

Not relevant to mining environmental impact studies, no; although deaths, injury and illness in surrounding populations are.

Relevant to ongoing minesite safety records and operations, yes.

It's a question of scope; a geoscience-aware AI that can answer questions about resource availability, resource extraction, and associated risks, costs and impacts, and that includes the kind of information found in, say, https://www.spglobal.com/marketintelligence/en/campaigns/met... would be expected to be aware of mining deaths.

Just as people want to be aware of nuclear risks for power and deaths in lithium processing from accidents and associated radioactive waste.

It's being touted as a Geoscience tool, not just a chatty geology database.


Social geoscience (the impact of geology on society) is very much core to geoscience and geoethics. Ironically, UNESCO and IUGS authored the paper below calling for a change in how we perceive geoscience, which was cited by the DDE project.

https://www.escubed.org/journals/earth-science-systems-and-s...


It’s a bit unfair. I think a proper comparison would be whether GeoGPT gives answers about other companies’/countries’ screw-ups. This makes it sound like a conspiracy when it could just be a poor bot.


It's an odd example, to be sure, part of the reason I pulled that from the Guardian article.

It's really pitting two general-knowledge AI DBs against each other on mining knowledge; it's unclear whether this is a smoking gun on censorship or incomplete data entry.

It does highlight a knowledge void though.


The Deep-time Digital Earth (DDE) GeoGPT uses Alibaba’s Qwen LLM. It is no conspiracy; the fact is that Chinese AI has to abide by Chinese censorship laws.

It does give answers on other companies and countries.

Check out Leonard Lin’s excellent independent technical evaluation of Qwen’s Chinese government censorship.

https://huggingface.co/blog/leonardlin/chinese-llm-censorshi...

As Lin points out, Qwen is Chinese-government censored, so why bother using it in a global context when there are equally good models (if not better) that are not aligned that way, and so won’t issue Chinese government propaganda or refuse to answer.

It seems odd that the International Union of Geological Sciences (IUGS), which is non-political and non-governmental, has allowed DDE to build such a system for the world’s community of geologists with such Chinese government bias and censorship. Maybe they did not realise what was being developed using their name.


Propaganda-laden gossip article. I don't get why people write in this light.

You can see it from this sentence: "A year earlier, Irina Artemieva, who is Russian born but left the country decades ago, had taken over as president."

'But left the country decades ago': what a useless detail, revealing the real intent behind writing this. If it were just an interesting/honest note about the person, the word 'but' would not be used.


Put the model in the public domain and nobody would complain. If you gatekeep, people will find a reason to complain.


The “panic” is mostly from “a group of publishers, led by Phoebe McMellon, CEO of GeoScienceWorld”. They sure as hell will complain if you threaten their business model of charging ~$50 a pop to read a paper, public domain or not.


Wasn't it the same with other AIs? We saw that with Gemini, Copilot, etc., where they were not willing to report on certain things or provide a proper response.


I don’t understand why they didn’t use an open-source model to develop the chatbot. While questions about copyrights and permissions remain unsolved, there is arguably more transparency and no state control. Why did they have to use a proprietary Chinese model? Just because the programmer is Chinese and they didn’t have any alternative?


> Why did they have to use a proprietary Chinese model? Just because the programmer is Chinese and they didn’t have any alternative?

???

> It is being developed by Jian Wang, chief technology officer of e-commerce giant Alibaba. Built on Qwen, Alibaba’s own chatbot...

https://github.com/QwenLM/Qwen2

https://huggingface.co/Qwen


Qwen is an open-source model. And for a while it was the top non-English open-source model.

Meanwhile, it is also a Chinese model.


Does anybody know if there are published papers explaining this? I found these two which might be related but not actually about the same thing:

GeoGPT: Understanding and Processing Geospatial Tasks through An Autonomous GPT (https://arxiv.org/abs/2307.07930): This does not involve fine-tuning but tool use and prompting for reasoning.

BB-GeoGPT: A Framework for Learning a Large Language Model for Geographic Information Science (https://www.researchgate.net/publication/381630694_BB-GeoGPT...): This involves fine-tuning but of smaller models (Llama-2-7b) - the BB stands for "baby".


Huh, the article describes more of a power struggle inside the EGU than the AI itself.


I mean, overstated levels of intelligence aside, who thought it was more appropriate to model the geosciences in natural language vs, oh, I don't know, physics and chemistry?


Publicly funded research, locked behind paywalls, was scraped to build the chatbot, by “China” rather than OpenAI, causing “people” to lose their s**t.

I do think IP infringement is not cool in general, but it doesn’t seem right that geo research is private property.


I also think this is going to bifurcate scientific research. Communities that are willing to run AI over their knowledge base are going to develop a big advantage over those that don't.

I have a friend who applies research to businesses as a consultant. One of his biggest challenges is how to index all the papers and work out what is relevant to a particular topic. I don't know if the current generation of bots is up to the challenge, but sooner or later a ProfessorGPT will be perfect for that niche. Then journals that force humans to manually search through large numbers of papers will be massive albatrosses that hamper scientific progress.
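The core of that indexing problem (ranking papers by relevance to a topic) can be sketched with nothing fancier than term-frequency cosine similarity. The paper IDs and snippets below are made up for illustration; a real system would use embeddings or BM25 over full abstracts:

```python
import math
from collections import Counter

# Toy corpus standing in for paper abstracts (hypothetical examples).
papers = {
    "P1": "fossil record of deep time stratigraphy and sedimentary basins",
    "P2": "lithium extraction processing and associated environmental impacts",
    "P3": "machine learning for seismic interpretation of subsurface structures",
}

def tf_vector(text):
    """Term-frequency vector for a whitespace-tokenised text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank(query, corpus):
    """Return paper IDs sorted by similarity to the query, best first."""
    q = tf_vector(query)
    return sorted(corpus, key=lambda pid: cosine(q, tf_vector(corpus[pid])),
                  reverse=True)

print(rank("environmental impact of lithium mining", papers))  # P2 ranks first
```

The limitation is exactly the one discussed in this thread: pure term matching misses synonyms and paraphrase, which is where LLM embeddings (or a knowledge graph) would be expected to help.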


> Communities that are willing to run AI over their knowledge base are going to develop a big advantage over those who don't

This is debatable.

I've seen countless "AI on knowledge base" projects, and on the whole they have not been much better than just using Elasticsearch. Some aspects are better, e.g. discovery, but some aspects are worse, e.g. accuracy and speed when you are looking for something specific.

I would argue that simply having a knowledge graph in front that can provide related papers for a topic would accomplish the goals better.


> Communities that are willing to run AI over their knowledge base are going to develop a big advantage over those who don't.

I have a hard time seeing this. If you're an academic or an industrial researcher, the hard part of the literature review isn't finding the relevant papers, it's digesting them, and in some fields (e.g., chemistry), replicating their results. If you're more an industry person trying to apply academic research, well, in general you probably want a good textbook synthesis of the field rather than trying to understand stuff from research papers.

From your second paragraph, it seems to me that you're thinking AI will help with the textbook synthesis step, but this is the sort of thing that as far as I can tell, current LLMs are just fundamentally bad at. To use a concrete example, I have been off-and-on poking at research into simplex presolving, and one of the things you quickly find is that just about everybody has their own definition of the "standard model", so to mix and match different papers, you have to start by recasting everything into a single model. And capturing the nuance of "these papers use the same symbols to mean completely different things" isn't a strong point of LLMs.


> If you're more an industry person trying to apply academic research, well in general, you probably want a good textbook synthesis of the field rather than trying to understand stuff from research papers.

That sentence there is what will probably be the wedge point that gives LLM-heavy communities an advantage. As LLMs improve, the question becomes "why shouldn't industry people apply academic research directly?".

> ... as far as I can tell, current LLMs are just ...

We're in the upswing of a new technology; it wasn't that long ago that interesting progress was a monthly or weekly occurrence. I'm not too fazed about where we might be right now. Alibaba is one of the companies with every chance of pushing the state of the art forward, and regardless, that state is going to get pushed by someone.


To make an analogy, right now using an LLM filter to read the literature is like reading Scientific American or New Scientist: fun, interesting, entertaining, and not always right on the detail.

Let's say, for example, you wanted to build your own cutting-edge LLM. Would you just ask an LLM how to do so? Or would you need to do more, and would a simple literature/internet search be just as effective as a starting point?

Note that in my experience, when you are a world expert in some tiny area (like when doing a PhD), you realize that quite a large proportion (~50%) of the stuff published in the area you really know about is either wrong in whole or in part, and another good proportion doesn't really move the field on.

So back to the original question: how did OpenAI get a lead in LLMs? The story I heard was that they talked to leading academics about who the best people in the field were and tried to hire them all.

I.e., to paraphrase Richard Feynman on the Emperor's nose question: you don't really find out the true answer by averaging over loads of ill-informed opinions; it is much better to carefully examine the nose/data source yourself.


I wouldn't go so far as a sibling commenter and say that most academic research is irreproducible bullshit. But academic research does tend to be a chewing-gum-and-baling-wire product, meant to hold together just long enough to get the necessary results. The rate-limiting step of turning academic research into useful products isn't "let's flip through all the academic research to find interesting papers," it's "figure out how to make this very-barely-works academic product usable on anything other than the exact things they did for the results section."

And, to be blunt, I have never seen anyone pitch an AI project to do that. AI pitches, even today, are almost invariably solving problems that are already decently solved (search is essentially a solved problem). And most of their proponents have shown no willingness to listen to the practitioners telling them what the actual problems they need better solutions for are.


Industry people (usually) shouldn't apply academic research directly because the majority of peer-reviewed published papers are irreproducible bullshit. Of course there is an occasional jewel in the muck so industry people with the skill (or luck) to identify those can get a jump on their competitors.


Industry would not have gotten to this stage in LLMs without academia. Your ignorance is not an excuse for spouting bullshit.


Good. Information deserves to be free.


Copyrights are toast with AI. Consider a corporation with an integrated legal defense team powered by AI agents. They will chew up resources until people give up. Think patent trolls on acid.

We seriously need to rethink IP ownership. It's likely an AI trained on a certain amount of IP might be useful in and of itself, perhaps earning a living being good at that particular thing, and maybe that is what will let us ease restrictions of use in other places.

I'm just spitballing here. I could be completely wrong.


They’d just be blackballed by the courts.

It’s already trivial for law firms to generate mountains of bullshit, and seeing the output of some firms first hand, I’d even call it the primary business model for a few of them.

But if they don’t stick within certain bounds of sanity, they get sanctioned by the court or can even be disbarred. Judges really don’t like references to non-existent precedents, or arguments based on laws that don’t actually exist.

And opposing counsel, unlike a random ignorant member of the public, has both the incentive and the resources to identify bullshit like that.

Decreasing the cost of generating bullshit while removing any human checks on it may sound interesting, and might even work for a short period of time, but it is going to meet the brick wall of reality pretty quickly. It has already started to do so in many cases.


> Copyrights are toast with AI.

Copyrights are toast for regular people who are up against companies laundering their work through AIs.

Copyrights for the megacorps are a different story. If you've ever wanted to see a clash of titans, wait until Microsoft's lawyers meet Disney's.


Microsoft has the cash on hand to buy a controlling stake in Disney. Disney is about 5-6% of Microsoft considering market cap. We’d need an Apple v Microsoft battle that’s important enough for boards to demand full litigation.


> Microsoft has the cash on hand to buy a controlling stake in Disney

This is interesting, because it’s the M&A equivalent of people who think they can spend a bad candidate into elected office. I don’t know what the technical equivalent is; the bigger-team fallacy?

TL;DR: money can buy power, but with diminishing marginal returns. Were Microsoft to try to control Disney, it would likely break first.


You think Disney has the wherewithal to win a fight against Microsoft?


There won't be a clash of lawyers. They'll come up with a settlement and a cut will be paid, or they'll just put up a special blocker just for them, like they already do with Tim Burton for image generation. Regular people will be essentially defenseless, of course.


Given what existing models can do right now, that moment might come sooner than you think.


Although Hazen understands the wariness about China’s involvement, “I keep my eyes open all the time,” he says. “I sense no agenda whatsoever.”

I’m sure they’re spending municipal money, cutting deals with Springer, and working with Alibaba out of the goodness of their hearts because they all care so much about humanity’s understanding of geoscience.



