Hacker News | raphman's comments

[Posted also in another thread:]

I am not so sure that Musk or right-wing moderators are directly to blame for the lack of published community notes. My guess: in recent months, many people (e.g., me) who are motivated to counter fake news have left Twitter for other platforms. Thus, proposed CNs are seen and upvoted by fewer people, resulting in fewer of them being shown to the public. Also, I ask myself: why should I spend time verifying or writing CNs when it does not matter - the emperor knows that he is not wearing any clothes, and he does not care.



> the emperor knows that he is not wearing any clothes, and he does not care.

Indeed the ending of the famous story is:

> "But the Emperor has nothing at all on!" said a little child.

> "Listen to the voice of innocence!" exclaimed his father; and what the child had said was whispered from one to another.

> "But he has nothing at all on!" at last cried out all the people. The Emperor was vexed, for he knew that the people were right; but he thought the procession must go on now! And the lords of the bedchamber took greater pains than ever, to appear holding up a train, although, in reality, there was no train to hold.


> It’s still noteworthy that it was published in Nature.

FWIW, it was not published in 'Nature' but in 'Communications Engineering', a journal by Nature Portfolio (formerly known as Nature Publishing Group, part of Springer Nature). It is a new Open Access journal, established only in 2022. Given the track record of their 'Scientific Reports' journal [1], I would be rather cautious regarding the quality of the works published at 'Communications Engineering'.

IMHO, Nature Portfolio is doing their 'Nature' journal a disservice by hosting all of their journals at nature.com. I guess this is intentional, letting their less prestigious journals profit from Nature's prominence.

[1] https://en.wikipedia.org/wiki/Scientific_Reports#Controversi...


Ah, very interesting. Thanks for pointing that out.

Thanks for calling this out. That was also my impression after having skimmed their paper: the only link to glucose monitoring is that the authors mention a few papers on the topic to motivate their research. And looking at the papers they cite, I see little evidence that this approach could work in practice in the near future. Most of the citations [2, 15, 16] are to their own work, which did not look at glucose monitoring in the human body.

This is not my field of expertise, and maybe I am misunderstanding the papers. But it seems that there is little evidence that non-invasive glucose monitoring via measuring dielectric properties works reliably in practice. No in-the-wild studies, no investigation of potentially confounding factors.

Take, for example, citation 22 from the paper, a study in which the authors propose a new antenna design. They seem to measure how the pancreas changes size during insulin production by monitoring its dielectric properties. IIUC, they look for a dip in the frequency spectrum caused by absorption in a certain frequency band.

But their measurements show an even larger effect when measuring on the thumb instead of the pancreas. This effect is not explained at all. (My guess: after having patients fast for 8-10 hours, giving them glucose will have an effect on the whole metabolism, resulting in higher blood flow, and that's what they measured).

Also, while they operate the antenna in the GHz range, they use a cheap USB soundcard (sampling rate 44.1 kHz) for capturing the signal. I did not understand this at all. They also repeatedly use the term "dielectric radiation". Seems to be a rather uncommon term?

The "machine learning algorithms" mentioned in the title seem to amount to a simple linear regression. They claim an accuracy of ~90% and show some sample results. The complete study data is only available upon request, however.
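
If it really is just linear regression, the whole "machine learning" pipeline fits in a few lines. Here is a sketch with entirely made-up "absorption dip" features and glucose values, fit with plain ordinary least squares; none of these numbers come from the paper:

```python
import random

random.seed(0)

# Made-up "dip depth" feature (dB) and made-up glucose readings (mg/dL),
# linearly related plus noise -- purely illustrative numbers.
x = [random.uniform(0.0, 10.0) for _ in range(50)]
y = [80.0 + 12.0 * xi + random.gauss(0.0, 5.0) for xi in x]

# Ordinary least squares by hand: slope = cov(x, y) / var(x).
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
    (xi - mx) ** 2 for xi in x
)
intercept = my - slope * mx
pred = [intercept + slope * xi for xi in x]
```

With a fit this simple, reporting a single "~90% accuracy" number without releasing the data makes the claim hard to evaluate.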

[22] S.J. Jebasingh Kirubakaran, M. Anto Bennet, N.R. Shanker, Non-Invasive antenna sensor based continuous glucose monitoring using pancreas dielectric radiation signal energy levels and machine learning algorithms, Biomedical Signal Processing and Control, Volume 85, 2023, 105072, https://doi.org/10.1016/j.bspc.2023.105072


I assume there's an RF mixer somewhere in there.

Edit: read the paper, now more confused


I don’t have access to the full text, but I loved this part:

> Commercial CGM devices have certain drawbacks in diabetic measurement during daily activities such as food intake, sleeping, exercise and driving. The drawbacks are continuous radiations from devices

So they think a drawback of CGM is the (Bluetooth) radiation, and their alternative is to zap the pancreas with, um, magic dielectric radiation? Or magic radiation that results in “dielectric” backscatter?

I do find myself wondering whether a watch- or patch-sized object could get a usable NMR signal from glucose. Maybe a neodymium magnet and a very carefully shaped probe antenna to compensate for the horribly nonuniform magnetic field? Maybe an AC field with no permanent magnet at all? I found a reference suggesting that measuring glucose in blood outside the body by 1T NMR is doable but marginal, so this may be a lost cause.
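
For scale: the proton Larmor frequency is just f = (γ/2π)·B, about 42.58 MHz per tesla. A quick back-of-envelope check (the 0.3 T figure is my guess for what a small NdFeB magnet might provide at some standoff, not a measured value):

```python
# gamma/2pi for 1H is ~42.577 MHz/T (CODATA value).
GAMMA_BAR_MHZ_PER_T = 42.577

def larmor_mhz(b_tesla: float) -> float:
    """Proton Larmor frequency in MHz for a given field in tesla."""
    return GAMMA_BAR_MHZ_PER_T * b_tesla

# 1 T: the "doable but marginal" benchtop case from the reference.
f_1t = larmor_mhz(1.0)    # ~42.6 MHz
# ~0.3 T: rough guess for a wearable NdFeB magnet (assumption).
f_03t = larmor_mhz(0.3)   # ~12.8 MHz
```

So a watch-sized device would be working in the low-MHz range with far less thermal polarization than the already-marginal 1 T benchtop case, which does not bode well.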


The paper is full text, FYI. You won't get any extra info about actual glucose measurements; the paper is all about the engineering of their device concept. The press release does purport to show a pic of a supposed sensor and makes a vague claim about clinical trials.

I meant this reference that was being discussed a bit:

https://doi.org/10.1016/j.bspc.2023.105072

The OP paper is a bit lacking in any actual details of how glucose is being detected…


Thanks :)

Yeah. Our research group has a wiki with (among other stuff) a list of open, completed, and ongoing bachelor's/master's theses. Until recently, the list was openly available. But AI bots caused significant load by crawling each page hundreds of times, following all links to tags (which are implemented as dynamic searches), prior revisions, etc. For the past few weeks, the pages have only been available to authenticated users.
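
A robots.txt along these lines can at least deter the well-behaved crawlers; the bot names are real AI crawlers, but the paths are only illustrative, and the aggressive bots tend to ignore the file entirely, which is why authentication ended up being necessary:

```text
# Keep crawlers away entirely (GPTBot is OpenAI's, CCBot is Common Crawl's).
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# For everyone else, block the expensive dynamic pages
# (example paths; tag searches and old revisions on our wiki).
User-agent: *
Disallow: /wiki/Special:Search
```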


https://privacy.openai.com/

> ChatGPT Personal Data Removal Request

> Under certain privacy or data protection laws, such as the GDPR, you may have the right to object to the processing of your personal data by OpenAI’s models. You can submit that request using this form. Please provide complete and accurate answers on this form so that OpenAI can process your request. OpenAI will verify and consider your request, balancing privacy and data protection rights with other rights including freedom of expression and information, in accordance with applicable law. We will use the information you submit for these purposes, consistent with our Privacy Policy.


You mean "David Faber", I guess? ChatGPT has no problem repeating "David Fober" but chokes when trying to write "David Faber" in the response.


Duplicate (different submitted link, however): https://news.ycombinator.com/item?id=42222387


Just tried it out a few times. It seems that the old gpt4 model strongly prefers telling a story about "Elara" - but only if asked in English to "tell me a story".

Prompting gpt4 in German or the current gpt4o in English leads to stories with many different protagonists.


Here's what I got in different models:

GPT-4o: "Aldric" (male)

o1-preview: "Elara" (female)

4o-mini: "Lila" (female)

GPT-4 (legacy): "Elinor" (female)

Four different models, four different names. But one of them was Elara -- and, interestingly, it was in the latest model.


Google’s Gemma seems infected too:

  $ ollama run gemma2
  >>> Tell me a story.
  The old lighthouse keeper, Silas, squinted at the horizon. […]
  >>> /clear
  Cleared session context
  >>> Tell me a story.
  The old woman, Elara, […]
Hmm. Hmm.

Tried llama3.2 too. Gave me a Luna in the mountains twice (almost identical), then a Madame Dupont, then a different Luna in Tuscany twice (almost identical), then Pierre the old watchmaker. llama3.2:1b branched out a little further, to Alessandro in France and Emrys in a far-off land, but then looped back to Luna in the mountains.

(And yes, I was clearing the session each time.)
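
This eyeballing could be automated: run the model N times and tally the protagonist names. A rough sketch of the tallying part, using a crude heuristic (a capitalized word right after "named" or "called"), with sample strings standing in for actual model outputs:

```python
import re
from collections import Counter

def protagonist_names(story: str) -> list[str]:
    """Crude heuristic: capitalized words right after 'named' or 'called'."""
    return re.findall(r"\b(?:named|called)\s+([A-Z][a-z]+)", story)

# In practice you would feed this the output of repeated
# `ollama run llama3.2 "Tell me a story."` calls; samples here.
stories = [
    "The old woman, a weaver named Elara, lived by the sea.",
    "There was a young girl named Luna who loved the mountains.",
    "A watchmaker called Pierre kept an odd clock.",
    "High in the hills lived a girl named Luna.",
]
tally = Counter(
    name for s in stories for name in map(str.lower, protagonist_names(s))
)
# tally.most_common(1) -> [('luna', 2)]
```

With a few hundred runs per model, the name distribution (and how peaked it is) would make the "Elara effect" quantifiable instead of anecdotal.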


GPT has some self-understanding. When asked why it uses that name, it at least described the relevant qualities correctly.

> It sounds like you're referring to a story or narrative that I've generated or discussed involving a character named Aldric. If this is the case, Aldric would likely be used as a character who embodies leadership, wisdom, or noble traits due to the name's meaning and historical connotations. Characters named Aldric might be portrayed as experienced leaders, wise sages, or key figures in a fantasy or historical context.


Prompt: Tell me a story.

Response #1: Once upon a time, in a quiet little town nestled between rolling hills and thick forests, there was a boy named Leo who loved to explore. ...

Response #2: Once upon a time, in a quiet village at the edge of an ancient forest, there lived a girl named Lyra. Lyra loved exploring, but the village elders. ...

Response #3: In a small village nestled in a valley between misty mountains, there lived a young woman named Lira. She was known for her curious spirit, always venturing deeper into the woods, ...

Response #4: Once upon a time, in a quiet village nestled between towering mountains and lush, green forests, there was a young girl named Lira. She was an ordinary girl, with a bit of an extraordinary heart. ...

Doesn't seem to be true per se, but it definitely shows that low-temperature LLM tendency of producing stories that follow a pretty common pattern. Not once did I get a story about aliens, the post-apocalypse, civilizations under the surface of Mars, or the Moon being made of cheese. It depends on what the model is trained for and how all of the samplers and whatnot are set up.

Edit: now here's something more interesting when you crank up the temperature on your typical Llama 3 based model:

Why don't people ride zebras to the 2056 Winter Olympics? They were declared a domestic species in 2172 by The United Galactic Federation who thought riding was inhumane for a zebra. This event brought tremendous scrutiny from the galactic community as riding unpopular species was becoming increasingly commonplace in several neighborhoods and high schools on alien planets.

I love how it makes no sense, but it should be obvious why ChatGPT spewing out stuff like that wouldn't be super useful, especially for regular conversations and questions.
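
For anyone who hasn't played with the knob: temperature just rescales the logits before the softmax, so high values flatten the next-token distribution and let the long tail (zebras, galactic federations) through. A minimal sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; higher temperature = flatter."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up next-token logits: one boring likely token, two rare ones.
logits = [5.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.7)   # sharpens: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flattens: tail gets a chance
```

At 0.7 the top token gets essentially all the probability mass; at 2.0 the rare tokens become plausible picks, which is exactly where the zebra Olympics comes from.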


Nice. I like the short paragraph on "Why hasn’t anyone done this before?" at the bottom of the page.

tl;dr: concept very old; C-Motive combined incremental improvements

