Software salaries have been higher than hardware salaries since well before covid. So that part of your theory is bunk.
Had it occurred to you some people repeat stuff because they just want other people to believe the things they think? That's kind of how discussion works. It isn't a measure of truth to see the same line repeated at all. That's a fallacy.
The COVID part: I was replying to the GGP's claim that the central bank doesn't print money.
Generally speaking, QE has been going on since 2008.
> Had it occurred to you some people repeat stuff because they just want other people to believe the things they think?
Yes, that's a possibility, not a definite. Why did you think that hasn't occurred to me? I'm not the one saying "I don't know why people keep saying this."
I specifically offered an explanation that involves the GGP possibly being wrong: "Does it occur to you that maybe people keep saying this because that's the truth?" Note the "maybe". It's not rhetoric. I meant what I wrote.
Of course there are other explanations, but I'm pretty sure their view on what central banks do is wrong, or at least out of date. And I explained and gave pointers. I still wrote the "maybe".
Anyway, given your definite claim that people just repeat stuff because they just want other people to think the same, I guess discussion with you becomes meaningless at this point.
Finally, I'll just note that quoting my counterexample to explain what central banks do as evidence that I got my original claims wrong isn't really an argument either way.
Yea, I'd imagine comparing a 9-year-old version of a language to another would be "mind melting". And you are comparing to Java 8, from before Java re-organized their JEP process to ship features far faster, with a different release philosophy and feature roadmap.
Kotlin is worth picking up, but after seeing the speed at which Kotlin moves compared to how Java moves now, I don't think Kotlin will keep up long term.
That's weird, as my understanding of the YT creator community is constant strife, ever-changing rules, arbitrary bans, strikes on their accounts for vaguely matching copyrighted work, and most recently for swearing too much, or too early, in a video.
YT is not friendly to creators, for the same reason Twitch is not friendly to creators: advertisers hate what people want to watch.
Even Patreon, where people pay rather than watch ads, repeatedly runs into issues with creators because of pay schedules, amounts, percent cuts, platform features, and more.
If it's someone else's platform, creators don't win.
Sensor quality in phones goes down, and AI makes up for it, because good sensors are expensive but compute time in the cloud on Samsung-owned servers is cheap. You take a picture on a crappy camera, and Samsung uses AI to "fix" everything. It knows what stop signs, roadways, buses, cars, stop lights, and more should look like, and so it just uses AI to replace all the textures.
Samsung then sells what's in the image, hallucinated data and all, to advertisers and others. People can't tell the difference and don't know. They "just want a good looking picture". People further use AI to alter images for virtual likes on TikTok and Insta.
This faked data, submitted by users as "real pics in real places" is further used to train AI models that all seem to think objects further away have greater detail, clarity, and cleanliness than they should.
You look at a picture of a park you took, years before, and could have sworn the flowers were more pink, and not as red. You are assured, by your friend who knows it all, that people's memories are fallible; hallucinating details, colors, objects, sizes, and more. The image, your friend assures you further? "Advanced tech captured its pure form perfectly".
And thus, everyone will demand more clarity, precision, details, and color where their eyes don't remember seeing.
You've got a friend, spouse or someone close who has hundreds of pictures of you on their phone. Their phone has an "AI chip" that is used to fine-tune the recognition models and photo models on your photo library. Like Google Photos tags images of people you know, so does the model. It also helps sharpen images - you moved your head in an image and it was a bit blurry, but the model just fixed it, because like the original model had for the moon, it has hundreds of pictures of you to compensate.
One day, that person witnesses a robbery. They try to take a photo of the robber, but the algorithm determines it was you in the photo and fixes it up to apply your face. Congratulations, you are now a robber.
For a long time, digital cameras have embedded EXIF metadata about the conditions under which a photo was taken: camera model, focal length, exposure time, etc.
Nowadays this metadata should be extended with a description of any AI postprocessing operations.
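For reference, the existing tags are trivial to read programmatically; a minimal sketch with Pillow (the "photo.jpg" path and the `AI_PROCESSING_TAG` id at the end are my own placeholders to illustrate the kind of extension being proposed, not real standard fields):

```python
from PIL import ExifTags, Image

img = Image.open("photo.jpg")      # any JPEG straight off a camera
exif = img.getexif()

# The tags cameras have embedded for decades: make, model, datetime, ...
for tag_id, value in exif.items():
    print(ExifTags.TAGS.get(tag_id, hex(tag_id)), value)

# Hypothetical extension: a tag describing AI postprocessing steps.
# No such standard EXIF field exists today; this tag id is made up.
AI_PROCESSING_TAG = 0x9FFF
exif[AI_PROCESSING_TAG] = "super-res:moon-model-v3; denoise:strong; face-smooth:on"
img.save("photo_tagged.jpg", exif=exif)
```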
> Nowadays this metadata should be extended with a description of any AI postprocessing operations.
Of course. But to ensure that's valid for multiple purposes we need a secure boot chain, and the infrastructure for it.
To get there we need an AI arms race. People trying to detect AI art with machine learning vs. increasing AI sophistication. Companies trying to discourage AI leaks of company secrets and reduce liability (and reduce the tragic cost of mistakes of course) vs. employees being human.
Or we could have built a responsible and reasonable government that can debate and implement that.
Maybe I'm naive. I'll take responsibility for that.
In the meantime, it's playtime for the AIs. Bring your fucking poo bags, they're shitting everywhere (1), pack it in, pack it out.
(1) What the world didn't know was that this was beautiful too.
> Of course. But to ensure that's valid for multiple purposes we need a secure boot chain, and the infrastructure for it.
>
> To get there we need an AI arms race. People trying to detect AI art with machine learning vs. increasing AI sophistication.
Or we can just recognize the lunacy of it and opt out of caring. You can't stop the flood, so you just learn to live with it. With the right view, the flood becomes unimportant.
Secure boot in practice always becomes slave boot. The user loses the ability to even control the operating system running on his device. It is the final nail in the coffin for the already dying concept of general purpose computing.
What measures can the government implement to combat this? AI image modification is realistically possible even on consumer hardware running locally. There is no going back.
Photos taken by cell phone cameras increasingly can't be trusted as evidence of the state of something. Let's say you take a picture of a car that just hit a pedestrian and is driving away.
Pre-AI, your picture might be a bit blurry, but say it's discernible that one of the headlights had a chunk taken out of it; it's only a few pixels, but there's obviously some damage, like a hole from a rock or a pellet gun. Police find a suspect, see the car, note damage to the headlight that looks very close, get a warrant for records from the suspect, find incriminating texts or whatnot, and boom, the person goes to jail for killing someone (assuming this isn't the US, where people almost never go to jail for assault, manslaughter, or homicide with a car), because the judge or jury are shown photos from the scene, taken by detectives in the street or the person's driveway, and then by evidence techs nice and close-up.
Post-"AI" bullshit, the AI sees what looks like a car headlight, assumes the few-pixels damage is dust on the sensor/lens or noise, and "fixes" the image, removing it and turning it into a perfect-looking headlight.
Or, how about the inverse? A defense attorney can now argue that a cell phone camera photo can't be relied upon as evidence because of all the manipulation that goes on. That backpack in a photo someone takes as a mugger runs away? Maybe the phone's algorithm thought a glint of light was a logo and extrapolated it into the shape of a popular athletic brand's logo.
I’d just like them to fix the problem where license plates are completely unreadable by most consumer cameras at night. It’s almost as though they are intentionally bad. (The plate ends up as a blown out white rectangle.)
The recent Kyle Rittenhouse trial had an element that hinged on whether Apple's current image upscaling algorithm uses AI, and hence whether what you could see in the picture was at all reliable. The court system is already aware of and capable of dealing with these eventualities.
“Aware of” does not necessarily mean “capable of dealing with”. Forensics is generally bad science, yet gets admitted into court all the time. This occurs despite many legal textbooks, papers, and court opinions highlighting the deficiencies.
The question was more general: whether the iPad zooming introduced any different pixels (e.g. a purple pixel between red and blue). Or, "uncharged pickles", as the judge put it.
It doesn't even need AI to be problematic. Pinch-zoom has no business being used in the courtroom as it inherently can introduce issues. However, a fixed integer ratio blowup of the image shouldn't be problematic. (2:1 is fine. 1.9:1 inherently can't guarantee it doesn't introduce artifacts.)
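To make the integer-vs-fractional point concrete: a 2:1 blowup can simply replicate each pixel, so every output value already existed in the source, while a fractional ratio has to blend neighbors and thereby manufactures values that were never captured. A toy numpy sketch (nothing to do with what the iPad actually does):

```python
import numpy as np

src = np.array([[10, 200],
                [30,  90]], dtype=np.uint8)   # a tiny 2x2 "image"

# 2:1 nearest-neighbor blowup: every output pixel is a copy of a source pixel.
doubled = np.repeat(np.repeat(src, 2, axis=0), 2, axis=1)
print(doubled)

# Fractional zoom has to interpolate. Sampling halfway between the two top
# pixels with linear interpolation yields 105, a value that exists nowhere
# in the original data.
blended = 0.5 * src[0, 0] + 0.5 * src[0, 1]
print(blended)   # 105.0
```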
I thought it was really funny that in the 1980s people in medical imaging were really afraid to introduce image compression like JPEG because the artifacts might affect the interpretation of images, but today I see article after article about neural image enhancement, and there seems to be almost no concern that a system like that would be great at hallucinating both normal tissue and tumors.
As far as law and justice go, it cuts the other way around too. If it is known to be possible for cameras to hallucinate your identity, it won't be possible to use photographic proof to hold people to account.
It seems fairly easy to bake a chain of custody into your images. Sensor outputs a signed raw image, AI outputs a different signed “touched up” image. We can afford to keep both in this hypothetical future; use whichever one you want.
Once generative AI really takes off we will need some system for unambiguously proving where an image/video came from; the solution is quite obvious in this case and many have sketched it already.
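A minimal sketch of that signing step, using Ed25519 from Python's `cryptography` package (the in-process key generation and the stand-in byte strings are purely illustrative; a real camera would keep the private key inside the sensor's secure element and the manufacturer would publish the public key):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In the hypothetical camera this key never leaves the sensor package;
# here it is generated in-process only for illustration.
sensor_key = Ed25519PrivateKey.generate()
public_key = sensor_key.public_key()

raw_image = b"\x00\x01...raw sensor bytes..."     # stand-in for the raw capture
raw_sig = sensor_key.sign(raw_image)              # signed straight off the sensor

touched_up = raw_image + b"(AI-enhanced)"         # stand-in for the AI stage
enhanced_sig = sensor_key.sign(touched_up)        # second signature for the edited file

# Anyone holding the public key can later check which artifact they're looking at.
try:
    public_key.verify(raw_sig, raw_image)
    print("raw image matches its signature")
    public_key.verify(raw_sig, touched_up)        # wrong pairing: raises
except InvalidSignature:
    print("these are not the bytes that were originally signed")
```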
The images generated by SLR and mirrorless cameras are already signed with device-embedded keys during EXIF embedding. Every manufacturer sells such verification systems to law enforcement or other institutions to verify such images.
Sometimes there are exploits which extract these keys from the cameras themselves, but I haven't heard of any recently.
And the answer is “spam filters and AI personal curation agents will drop any image without chain of custody from every feed that claims to be about reality”.
In a world where any image or video can be generated, chain of custody to a real-world ground-truth will be vitally important.
I think anything analog is going to be suspect; you can take a photo of a digital image and it could look like a real analog image.
Absent a chain of custody (perhaps including GPS baked into the signed image data blob), I think analog artifacts will become untrustable. Unless you can physically date them to pre-generative era!
So now not only are there AI-imagined details in your images but those details are also different depending on which device the image is viewed on. Lovely.
It’s a fair point but with high enough resolution (and perhaps GPS baked into the trusted data) I suspect it would be very hard to actually forge a digital image from an analog source.
Likewise depth fields and other potential forms of sensor augmentation.
I feel like with ScarJo lately you either get "big budget movie where she's phoning it in" or "small movie most people won't watch where she's a great actress." Does this fall into either of those categories?
Obviously, someone who is in a good enough position to take a semi-clear photo, and who knows you so well that their phone is full of your face, will not recognize you directly but will be convinced that you are the robber after looking at the photo. At this point we can go full HN and assume that you will be convicted anyway, because the judge is a GPT-based bot.
This "future" is present in current Pixel lineup btw. Photos are tagged as unblured, so for now you can still safely take a selfie with your friends.
Imagine you want to "scan" a document using the camera app like many people do, and the AI sees blurry numbers and fixes them for you.
When will you notice that some numbers, even ones that look clear, are different than on the original document?
I think this is such a well-known case that Samsung wouldn't make such a trivial mistake, and if the AI detects you are photographing a document it'll disable the AI magic automatically. But imagine a scenario where, e.g., you are trying to photograph a phone number from some banner - would the AI also guess it shouldn't mess with the numbers there? Imagine someone with poor eyesight wants to use this to read something from a sign that is far away.
And I would bet that super zoom would be an attractive feature for a person with such sight issues.
AI-based image generation is surely already good enough that a single digital photo can't count as evidence alone. But your scenario doesn't make much sense to me - are you suggesting AI will have reached a point it's stored and trained on images of almost everyone's faces, to the point it could accurately/undetectably substitute a blurry face with the detailed version of an actual individual's face it happens to think is similar?
I'd be far more worried about deliberate attempts to construct fake evidence - it seems inevitable that eventually we'll have technology to cheaply construct high-quality video and audio media that by current standards of evidence could incriminate almost anyone the framer wanted to.
Look similar to a celebrity? Your face gets replaced, because the number of photos of the celebrity in the corpus outweighs photos of you. And when those doctored photos end up in the corpus, weighting will be even further towards the celebrity. So people who look less like the celebrity get replaced, because it is almost certainly them according to the AI. Feeding back until everyone gets replaced by a celebrity face. And then the popular celebrities' faces start replacing the less well known celebrities'. And we end up with true anonymity, with everyone's face being replaced by John Malkovich.
However, their omnipresent surveillance data will show that eight years, seven months, and thirteen days earlier you cut off the DA's third cousin while driving on the freeway, so the DA will conveniently forget to present this alibi as evidence.
AI isn't the thing to be worried about. People with power abusing AI is the thing to be worried about.
Whether zooming in on an image on an iPad adds "extra" details was already a contentious discussion during the Kyle Rittenhouse trial. The judge ultimately threw that particular piece of evidence out, as the prosecution could not prove that zooming in does not alter the image.
> One day, that person witnesses a robbery. They try to take a photo of the robber, but the algorithm determines it was you in the photo and fixes it up to apply your face. Congratulations, you are now a robber.
Sounds like pretty standard forensic science, like bite marks and fingerprints.
Basically this. As "neat" as AI "improvement" is, I don't think it has any actual value; I can't come up with any use-case where I can accept it. "Make pictures look good by just hallucinating stuff" is one of the harder ones to explain, but you did it well.
Another thing: pictures for proof and documentation, maybe not when they're taken but after the fact, for historical reasons, or forensics. We can't have every picture automatically compromised as soon as it's taken. (Yes, I know that Photoshop is a thing, but that's a very deliberate action, which I believe it should be.)
I think the main use case is "I'm a crummy photographer and all I want is something to remind me that I was there" and "Look at my cat. Look! Look at her!"
That's me. I'm a lousy photographer, as evidenced by all of the photos I shot back when film actually recorded what you pointed it at. My photography has been vastly improved by AI. It hasn't yet reached the point of "No, you idiot, don't take a picture of that. Go left. Left! Ya know what, I'm just gonna make something up," but it should.
I imagine there will remain a use case for people who can actually compose good shots. For the remaining 99% of us, we'll use "Send the camera on vacation and stay home; it's cheaper and produces better pictures" mode.
As a kid I was taking a photo in a tourist spot with a film camera and standard 50mm lens. An elderly local guy grabbed me by the shoulder as I framed the photo. We shared no common language and he (not so gently) pulled me over to where I should stand to get the better shot.
That would actually be a useful feature: I'm aiming the camera, and based on what makes "good professional" photos, it suggests "move to the left so you frame the picture well" or "those two people should be more spread out so it's not one person with two heads", etc., kind of like lane warnings on cars.
You don't need AI for taking better photos, for most people the phone just automatically taking a burst/video and picking a frame out for the still or stacking frames would be plenty. Lots of photos suck because of shit lighting. A camera intelligently stacking frames would fix a lot of people's photos.
"I'm a crummy photographer and all I want is something to remind me that I was there"
This is fine, and I can take good shots but at the same time? I only care about this level of shot most of the time too!
But then instead of a 20MP image, which:
* takes more space, and ergo, more flash drive space
* takes more space to store, to back up, to send
* is made 20MP by inserting fake data
Why not have a 2MP image, which is real, and let people's end-use device "fix" it? Because all that post-processing can be done when viewing at 2x or 4x, too!
Because advertising.
And that's sad. We'd rather think we have a better pic, and destroy the original.
And the space thing is real. Because, that same pic gets stored in gmail with 20 people, backed up, kept in all the devices, and so on!
And the LOL of it all, is that I bet when it is uploaded to facebook... it gets downsized!
edit: in fact, my email app allows me to resize on email, so I downsize that too! Oh, those poor electrons.
This is like Huffman compression on your photos but the AI companies created an exabyte size dictionary and now you can store your photo in a few kilobytes.
The key to all modern "AI" bullshit (ChatGPT, StableDiffusion etc) is that they're just exquisite lossy compressors, where a good-enough perceptual loss function has been learned automatically.
There's nothing wrong with that until people start assuming they're lossless (like Huffman encoding) which they definitely are not. Unfortunately the general public doesn't understand the difference.
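One concrete way to see the lossless/lossy distinction, as a quick sketch with zlib and Pillow (the exact error after the JPEG round-trip will vary, but it is essentially never zero):

```python
import io
import zlib

import numpy as np
from PIL import Image

# A noisy grayscale "photo".
pixels = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

# Lossless: zlib round-trips to exactly the same bytes.
packed = zlib.compress(pixels.tobytes())
assert zlib.decompress(packed) == pixels.tobytes()

# Lossy: low-quality JPEG comes back looking similar but not identical.
buf = io.BytesIO()
Image.fromarray(pixels).save(buf, format="JPEG", quality=30)
decoded = np.asarray(Image.open(buf))
print("identical after JPEG round-trip?", np.array_equal(decoded, pixels))  # almost surely False
```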
I'm a decent photographer and still use my phone for this. It's good enough, and can even skip the AI stuff if I want to. Or even better: I can keep the AI stuff in the raw and edit its impact on the final photo later.
Interestingly enough, one of the reasons Sony's flagships perform really badly in comparisons is that they are weak at computational photography. So even when the sensor is great it looks too real, which people don't like.
Yeah, I've a phone with a great camera; nature shots are great, but people don't like themselves in these photos. When pressed they talk about the defects in skin and teeth and eye position... Their phone's beauty filters created a fake mental image of themselves in their minds, and they dissociate from their real images.
It's weird. My mom's brand loyalty exists because Huawei's specific algorithm is part of her self-image.
How about using AI for sensor fusion when you have images from multiple different kinds of lenses (like most smartphones today)? I was under the impression this was the main reason why AI techniques became popular in smartphone cameras to begin with
I'm not aware of much fusion happening between different lenses (although I saw an article using that for better portrait mode bokeh), but AI is used to stack multiple images from the same sensor. You can do de-noise, HDR and other stacking stuff with clever code, but AI just makes it better.
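The non-AI version of that stacking fits in a few lines: averaging aligned frames cuts random sensor noise by roughly the square root of the frame count. A toy numpy sketch, assuming the burst frames are already aligned (which is the genuinely hard part on a handheld phone):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.integers(0, 200, (100, 100)).astype(float)    # the "true" scene

# Eight burst frames of the same scene, each with independent sensor noise.
frames = [scene + rng.normal(0, 25, scene.shape) for _ in range(8)]

single = frames[0]
stacked = np.mean(frames, axis=0)    # naive stack: just average the aligned frames

print("noise of one frame:    ", np.std(single - scene))    # ~25
print("noise of 8-frame stack:", np.std(stacked - scene))   # ~25 / sqrt(8), about 9
```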
Good for situations where you aren’t expecting or care about realism in this detail. AI hallucinations will be amazing for entertainment, especially games.
I want game content generation by AI, like for dungeon generation in an ARPG - it likely won’t be as good as hand crafted level by a developer but it should be more interesting than the current techniques where prefab pieces of a dungeon are randomly put together.
I think it's neutral. True photos can be just as incriminating (at least there might be some moral high ground, if you're into that sort of stuff) as AI-faked pictures. You may have no choice but to be incriminated by photos that lie.
The term has been around in this context since at least 2018[1], and indeed I have chat logs from 2019 talking about how mtl [machine translation] hallucinates, so no, this has been what people have been calling it for a while now. Perhaps what you're seeing is just rising awareness that this is a weakness of current-gen ML models, which is great, now even monoglots get to feel my pain :V
I already don't fully trust the images, audio and videos I take with my phone.
I work close to HW and I actively use the camera pictures and videos for future reference and debugging. It's small, fits in your pocket, and the bloody thing can record at 240fps to boot!
Until you realize there's so much post-processing done on the images, video and audio that you can't really trust it, and can't really know if you can turn it all off. The reality is that if you could, you'd realize there's no free lunch. It's a small sensor, and while we've had huge improvements in sensors and small lenses, it's still a small sensor.
Did the smoothing/compression remove details? Did the multi-shot remove or add motion artifacts you wanted to see? Has noise-cancelling removed or altered frequencies? Is the high-frame rate real, interpolated, or anything inbetween depending on light just to make it look nice?
In the end, they're consumer devices. "Does it look good -> yes" is what trumps everything in this market. Expect the worst.
> Did the smoothing/compression remove details? Did the multi-shot remove or add motion artifacts you wanted to see? Has noise-cancelling removed or altered frequencies? Is the high-frame rate real, interpolated, or anything inbetween depending on light just to make it look nice?
This has been true of consumer digital cameras for 25 years. It's not new to or exclusive to smartphone cameras. It's not even exclusive to consumer cameras as professional ones costing many times more also do a bunch of image processing before anything is committed to disk.
Granted, there isn't the ridiculous amount of postprocessing we see in phones, but even many dedicated cameras these days don't give you an actual raw sensor dump in their so-called RAW files.
Sony is probably the worst offender here actually. The a6x00 series applies lossy (!) compression to every RAW file without any option to disable, and there's an additional noise filter on long exposures that wreaks havoc on astrophotography:
That's why I used the a6000 as an example. Even that is significantly better than the phones we're seeing today, where even the "RAW" is entirely artificial.
A bit offtopic, but — a5100 is even cheaper, more compact (!) and has exact same imaging hardware, just with a slight artificial fps limitation. I've long ago upgraded to a6400 and still use the a5100 all the time when I don't plan any serious shooting.
I don’t know about Android, but at least with my iPhone I’m pretty sure there are apps that can capture raw sensor data. Additionally, I do have the ability to capture photos in Apple ProRAW format. I don’t actually know if these images are still processed, though.
Raw format? That means without any debayering applied? That would mean every pixel has only either r, g or b information and not combined. Be aware that there exist different debayering algorithms, sometimes applied depending on the context.
Also, without any correction for sensor calibration? That would mean every sensor has a different raw image for the same input.
My point being, without application of algorithms, the info captured by the sensor does not really look like an image.
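To make the "one color per photosite" point concrete, here's a toy sketch: simulate an RGGB Bayer mosaic from a full-color image, then do the crudest possible demosaic (reusing each 2x2 block's samples for all four pixels). Real debayering algorithms are far more sophisticated, which is exactly why different ones can give different-looking results from the same raw data.

```python
import numpy as np

rng = np.random.default_rng(1)
full = rng.integers(0, 256, (4, 4, 3)).astype(float)   # pretend 4x4 RGB scene

# Simulate an RGGB sensor: each photosite records only one channel.
mosaic = np.zeros((4, 4))
mosaic[0::2, 0::2] = full[0::2, 0::2, 0]   # R
mosaic[0::2, 1::2] = full[0::2, 1::2, 1]   # G
mosaic[1::2, 0::2] = full[1::2, 0::2, 1]   # G
mosaic[1::2, 1::2] = full[1::2, 1::2, 2]   # B

# Crudest demosaic: every pixel in a 2x2 block reuses that block's R, G, B samples.
out = np.zeros_like(full)
for y in range(0, 4, 2):
    for x in range(0, 4, 2):
        r = mosaic[y, x]
        g = (mosaic[y, x + 1] + mosaic[y + 1, x]) / 2
        b = mosaic[y + 1, x + 1]
        out[y:y + 2, x:x + 2] = [r, g, b]

print(np.abs(out - full).mean())   # nonzero: the reconstruction guesses missing samples
```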
"You just took a picture of the Eiffel Tower. We searched our database and found 2.4 million public pictures taken from the same location and time of day. Here are 30,000 photos that are identical to yours, except better. Would you like to delete yours and use one of them instead?"
So is pretty much every famous building you can imagine someone wanting to specifically photograph. Eiffel's little tower isn't unique in this regard. Chrysler's little building in New York, along with pretty much every other famous building, is as well. You just have to hope that your framing with these structures is not considered the focal point of your image.
I think they'll just use AI/GPT Hype and sell whatever next hypegrowth based on flimsy evidence they can.
The current narrative in financial places seems to be 2023 is gone but 2024 comes with a vengeance, feel safe investing, your real estate will grow in value! We're making people go back to the office so that it does! More lemmings! Less quality! More production!
Yes. I'm peak cynical after 3 years of self made economic destruction only to tell those who didn't cause it that "of course they must pay for the broken plates".
Misplaced ire? Maybe not well communicated in a brief spell between shopping for stuff on a Saturday.
Also, every time someone invests a lot in explaining themselves on HN, if it goes against the current thread narrative it'll still be ignored, so it's not like it's particularly worth it. HN feels more Reddit and less Slashdot of old.
Still a nice aggregator to find stuff that might pique interest.
I find myself sometimes wondering, with things like Peloton and Juicero, what would happen if Ron Popeil was born a couple decades later so he had access to the same VC those companies did?
'cause, yeah, what the world needs is YASIS, yet another stock imagery something. i'm way too outside the SV bubble to be affected by its reality distortion field to think that would be a good idea.
There was a pretty neat Google project a few years back that showed time-lapse videos of buildings under construction created entirely through publicly posted images that people had happened to take at the same spot over time.
I wonder if that'll ever cause legal problems in the future. Sorry, that photo someone took where the accused was in background at a party some years ago? He was kinda blurry and those facial features have been enhanced with AI, that evidence will have to be thrown out. Or maybe the photo is of you, and you need it as an alibi..
This is actually exactly what happened during the Kyle Rittenhouse case. A lawyer for the defense tried to question video evidence because of AI being used to enhance zoomed shots.
No, that is what the mainstream media lied to you about regarding what happened in the Rittenhouse case. One of several instances where one could see fake news and straight-up lies be spread in real time.
What actually happened was that a police officer testified that, using pinch to zoom on his iPhone, he saw Kyle point his rifle at the first person assaulting him. Mind you, we were talking about a cluster of around 5px. The state wanted to use an iPad to show the jury that same "evidence" via pinch to zoom, because with proper zooming, without an unknown interpolation algorithm, the defense had shown through an expert witness that this was not the case. No one in that courtroom understood the difference between linear and bicubic interpolation.
The defense did not understand it either, so they tried to explain to the judge that the iPad might use AI to interpolate pixels that aren't there, and that the jury should only use the properly scaled video the court provided, not an ad hoc pinch-to-zoom version on an iPad with unknown interpolation.
Thankfully the judge told the state to fuck off with their iPad, but the mainstream media used the defense's bad explanation against Kyle, when the reality was that the state basically tried to fake evidence live on stream by using an iPad to zoom in.
BTW, I'm German so I don't have a horse in the political race, but I watched the trial on live stream and saw the fake news come out while watching.
The story isn’t “the state tried to fake” but rather that the defense tried to get any image taken by a non-analog camera thrown out, as AI could have added detail where there is none.
I was watching this live for several days. That is not what happened. The defense paid an expert witness to upscale and enhance DIGITAL FOOTAGE using court-appropriate tools. The defense never "tried to throw out any image taken by a non-analog camera". They themselves USED DIGITAL FOOTAGE for their defense. I am sorry, but you have been lied to by the media.
It was accurate in the sense that the lawyer used the word ai. In context the lawyer said "Apple uses logarithms (sic) and AI to enhance the images. I don't know how that works". The word AI was used but in context the non-technical lawyer simply meant image enhancing algorithms. No one in that room actually discussed AI.
That being said, one could interpret the top comment that way.
It’s funny, twenty years ago I was specifically told not to use digital cameras but the crappy disposable film camera provided in case of an accident precisely because the digital version could be contested in court.
"Good sensors are expensive"-fun-fact: Mid-range CCTV cameras often have bigger sensors (1/1.8" or 1/1.2") and much faster lenses than an iPhone 13 Pro Max (1/1.9" for the main camera). The CCTV camera package is of course far bigger though. But still kinda funny in a way.
Edit: And the lenses on these are not your granddad's Computar 3-8/1.0, either. Most of the CCTV footage we see just comes from old, sometimes even analog, and lowest-bidder installations.
Bruce Sterling, I think, had a story in that direction. A Polaroid camera producer would develop photos that had been algorithmically enhanced so that their clients would consider themselves better photographers and their cameras superior. I'm regularly reminded of it these last few years as cameras become more and more their software.
Edit: fixed the author's name. Cannot find the exact story though.
This has in some ways been happening for decades. There are a few countries where the way to take a good portrait of a person is to over expose the photo, so skin tones are lighter. People bought the cameras and phones that did this by default (by accident or design in the 'portrait mode' settings). They didn't want realism.
This is just a progression of the nature of our human world - we have been replacing reality with the hyperreal for millennia, and the pace only accelerates. The map is the territory. Korzybski was right, but Baudrillard even more so.
Eventually people won’t care much for clarity and precision, that’s boring. The real problem is that everything that can be photographed will eventually have been photographed in all kinds of ways. What people really want is just pictures that look more awesome, in ways other people haven’t seen before.
So instead, raw photos will be little more than prompts that get fed to an AI that “reimagines” the image to be wildly different, using impossible or impractical perspectives, lighting, deleted crowds, anything you can imagine, even fantasy elements like massive planets in the sky or strange critters scurrying about.
And thus, cameras will be more like having your own personal painter in your pocket, painting impressions of places you visit, making them far more interesting than you remember and delighting your followers with unique content. Worlds of pure imagination.
I like the story, but I think people will notice pretty quickly as almost everyone reviews their photos right after taking them (so they can compare them with what they see in reality)
True, it's just a fun story. This reddit post makes it clear, though, that while people will review the images carefully, they may still not be able to accurately determine differences.
Just take the story above with one more minor step: You snap a pic of the park, briefly glanced at it to make sure it wasn't blurry (which the AI would have fixed anyway) or had an ugly glare (it did, the AI fixed it) or worse a finger (the AI also fixed that).
You're satisfied the image was captured faithfully and you did a good job holding your plastic rectangle to capture unseen sights. You didn't look closely enough to notice all the faked details, because they were so good.
This fake moon super enhance? It already proves people will fall for it. I could easily see people not realizing AI turned the flowers in the picture more red, or the grass just a little too green, etc.
iPhones already HDR the crap out of their photos. Saturation put to max levels for that pop, colors looking only vaguely like they really did.
The contrast between what my camera's raw with a stock profile puts out and what my iPhone puts out is striking, and it's very clear the iPhone's version of reality is optimized for Instagram and maximum color punch at the cost of looking real.
Thing is, that's what people like. So that's what we're getting.
The iPhone camera is tuned for realistic color unless you've left the style setting on Vibrant. I guess it doesn't have an "even less vibrant" style.
It does have higher than real contrast, but that's because images are 8-bit - if you don't try to fill the range, it's going to look low quality with banding artifacts.
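For the banding point: 8-bit output only has 256 levels, so if a capture spans a narrow slice of them, pipelines stretch it to fill the range. A minimal sketch of that stretch (assumed linear here, which real tone mapping is not):

```python
import numpy as np

# A flat, low-contrast 8-bit capture that only uses levels 100..140.
img = np.random.randint(100, 141, (32, 32)).astype(float)

# Linear stretch to fill the full 0..255 range before quantizing back to 8 bits.
stretched = (img - img.min()) / (img.max() - img.min()) * 255
out = stretched.round().astype(np.uint8)

print(img.min(), img.max())   # roughly 100.0 140.0
print(out.min(), out.max())   # 0 255
```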
Yeah, it exposes people in the foreground differently than the rest of the picture. That's still trying to fit them into 8-bit - basically it's trying to avoid the "black people don't activate soap dispensers" effect where dark skin isn't visible in a dark picture. (Also, if the user is only looking at a face in the picture, or if the face has a different lighting source than the rest of it, then it makes sense to calculate its exposure separately too.)
Larger cameras don't do that specific one as much ("dynamic range optimizer" or HDR programs do some of it), but they do care about skin in white balance and autofocus, and then photographers care when they're setting up flashes.
You've been able to make "unlifelike" photos ever since you could adjust the aperture, white balance, focal length, and choose the framing. How many times have you seen the Pyramids at Giza in pictures and films? How many of those times were framed to include the nearby city slums and dumped rubbish?
Not outright saying it (and not you; your comment was just a place to hang another comment off), but parent comments in this chain are saying "people already fall for it" about computer-adjusted photos, that software might adjust how "red the flowers are" or how "green the grass is", and that "you can turn all this off and get RAWs" - as if they believe there is some objective truth which RAWs capture, and which cameras used to show but now don't show because of software post-processing.
My point is that there never has been, cameras have always let you adjust the shot - including "how red the flowers are" by changing light source, film type, shadows, which contrasting other colours are nearby, etc.
It's like airbrushing and similar techniques, except automated. It's better than life (face-smoothing filters, eye-enhancing filters, whitening filters, HDR that blows the colors to make up for the tiny sensor and minuscule optics, major sharpening artifacts, smoothing texture, the list goes on and on)
With old-school photo processing--yes, in a darkroom--you could achieve unrealistic results. But it was a choice. That's not what you got when you sent your negatives to the Costco to get printed. That's akin to the results I get when I use my camera, especially when looking at jpgs straight-out-of-the-camera.
In contrast, we get modern cellphones doing incredible processing to almost arbitrarily replace content with what some algorithm feels you'll like better, whether or not it resembles reality.
I lament that it usually resembles beginner photographer work, where they've just discovered HDR tone-mapping, local contrast, global contrast, saturation, sharpening, and smoothing filters, and promptly slam every single one of them to the stops. I did it, and now I recognize it when I see it in cellphone pics my friends send me via imessage.
Been there, done that. I recognize the stigmata of saturation slammed to the stops and excessive use of HDR and local contrast.
The default is excessive editing now, probably because it helps cover up limitations of tiny sensors, small optics, and poor exposure due to poor technique.
Of course they do. Why would they want lifelike photos? To remind them of how utterly crap things actually are? No, people want to have that idea of what life was like. Sharing a lifelike image on socials would get laughed off the platform. enhance, Enhanced, ENHANCE get all the likes
It doesn't require one to be British to be thoroughly unimpressed with a photo and want to enhance it to improve on the situation, to be more in agreement with one's imagination.
The story has to get stretched a lot to imagine a total dystopia, but it's possible.
People like digital mirrors for all their benefits: lighter, more features, music and weather all on one "mirror". You can have voice chats, see how you might look with various makeup/styles/haircuts. This digital mirror gets so popular, and (undermining myself here a bit) high enough quality, that people want these new cool wall screens instead.
And you betcha, AI is of course going to be added. Take insta pics without needing to hold a phone, then apply filters all in one with your voice! Some people take dozens of photos, forgetting briefly how they look.
Now, people are used to every day seeing themselves in the digital mirror with minor touchups for how they would look if they used some sponsored makeup. They used to do it daily so it would match, but kept forgetting as it always showed the improved version (with a small icon saying, you are 40% matched to the predicted image!).
Some 20-something walking around in Chicago passes The Bean, and realizes they don't look quite the same as they did in their home mirror. They take out their phone to take a pic of themselves, which is of course synced to their mirror with the same "makeup enhancement suggestions", still warning them it doesn't match.
They put away their phone, confident in the knowledge The Bean is just a dirty and distorted mirror, which is why they don't look as good. The camera has always been trustworthy. Why doubt it today?
(again, fun story, I don't think this is likely. Just plausible for some people).
I guess I haven't noticed that people do that for things other than selfies.
I generally just burst-mode-scan an area or scenery location and later that night, or when I add to Strava or wherever, I have an old-school contact sheet (but with 60-80 images per thing) to look through. Then narrow it down to 5-10, pick the one or two I like best and discard the rest.
It's sorta like this already, in the _present_ - people post photos with filters all the time, smart phone cameras color-correct and sharpen everything with AI (not just Samsung's). It'll just become more and more commonplace
The problem is that this particular AI enhancement was not advertised as such. Also, in the linked article it was putting moon texture on ping pong balls, which seemed like overzealous application of AI. Samsung could have marketed it as "moon enhancement AI" or something like that, which would be more honest.
My worry about these features becoming commonplace is that if everyone just leaves them enabled, we would end up with many boring photos because they all look similar to each other. The current set of photo filters, even though they seem to be converging on particular looks, at least don't seem to invent as much detail as pasting in a moon that's not there.
I don’t understand why “AI” is even required for any of this, other than classification. Once classified as “the moon” the GPS and time, from the phone, could be used for a lookup table to a 10 gigapixel rendering, at the correct orientation, using NASA scans. It seems like a moon AI would give much worse results.
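That lookup really is just boring astronomy. A sketch with astropy, assuming a GPS fix and timestamp from the phone's metadata (getting the visually correct orientation would additionally need libration angles, which I'm omitting here):

```python
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, get_body
from astropy.time import Time

# Hypothetical GPS fix and timestamp pulled from the photo's metadata.
where = EarthLocation(lat=37.77 * u.deg, lon=-122.42 * u.deg, height=10 * u.m)
when = Time("2023-03-11 06:30:00")

moon = get_body("moon", when, where)
topo = moon.transform_to(AltAz(obstime=when, location=where))

# With altitude, azimuth and distance known, a renderer could index into a
# prerendered high-resolution moon map instead of hallucinating craters.
print(f"alt={topo.alt:.1f}  az={topo.az:.1f}  distance={moon.distance.to(u.km):.0f}")
```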
I still argue that my Galaxy Note 8 took cleaner pictures in general than my Galaxy Note 20. Everything feels overly processed, even in "pro" mode with all processing settings turned off.
I’ve always thought this was the final outcome of all AI; a feedback loop. Same when ChatGPT starts using things ChatGPT wrote itself as references to train itself.
We already have people demanding higher definition televisions to watch AI-sharpened 4K restorations of old films whose grain and focus would annihilate any details that small worth seeing.
There's an arms race between people adding nonexistent details to old films and people manufacturing televisions with dense enough pixels to render those microscopic fictions. Then they lay a filter over it all and everything becomes smooth gradients with perfectly sharp edges.
People want this. It's already happening. There was a post on the Stable Diffusion reddit where someone ran a picture of their grandparents through it to colorize and make it "look nicer". But it made significant changes to their clothes and hallucinated some jewelry they weren't wearing, along with some subtle changes in the faces. It's not real anymore, but hey it looks nicer right?
What you're imagining is Hyperreality from Simulacra and Simulation and has been happening since the invention of the television, and later the internet.
AI will accelerate this process exponentially and just like in The Matrix, most people will eventually prefer the simulation to reality itself.
This is my exact worry with things like chat gpt polluting the scrapable internet. The feedback loop might eventually ruin whatever value the models currently have by filling them with incorrect but plentiful generated nonsense.
I was thinking of a scenario. My children are adults and browsing photos of themselves as children. They come across a picture of the family on a vacation to the beach. They dimly remember it, but the memories are fond. They notice they are holding crisp ice cold cans of Coca Cola Classic (tm). They don’t remember that part very well. Mom and dad rarely let them drink Coke. Maybe it was a special occasion. You know what, maybe it would be fun to pick up some Coke to share with their kids!
So a future where reality and history are subtly tweaked to the specifications of those willing to pay…
Google already scans your photos folder and offers enhancements, stitches together panoramas and so on. So inserting product placement is totally believable.
These scenarios were much talked about a decade back in relation to advertising on photographs on Facebook, especially with Coca-Cola and other popular brands.
That isn't far from how iPhones work now. They have mediocre cameras; people only think they are good because they throw a lot of AI image enhancement at it.
Is Apple’s AI adding hallucinated details? The last I read, it’s just used to merge multiple images - up to 8 or 9 images - to form the final image. While I could see details getting lost or artifacts being added, I don’t think it can add actual “feature” details that don’t exist.
It already does some amount of feature detection and targeted enhancing. Things like making this face smoother, or this sky bluer, or this grass greener, etc. From there, I wouldn't be too surprised if some details get added beyond what was strictly captured (e.g. bubbles in drinks, leaves in trees, …)
I wonder when the AI will hallucinate a gun into a black person's hand because in the training data black people often had guns. Hands moving fast are really blurry, so it has to hallucinate a lot, so it doesn't seem impossible. I could see that becoming a scandal of the century.
You (partly) joke but I do recall a recent shooting trial in which there was a lot of arguments about an "enhanced" or zoomed image and what was really in it.
The novelty of things like Instagram is wearing off. I see more people not bothering to pull out their phone. It's not just wanting to compete with the photos taken by narcissists on the internet, it's also just losing interest, and knowing things you share can and will be used against you.
This will be the next step in film “restorations” too.
A combination of AI models trained on high resolution textures and objects, models of the actors, and training from every frame of the movie that can use the textures and geometry from multiple angles and cameras to "reconstruct" lost detail.
Oh man... I thought you were going to say, "the stop signs and strip malls were how we discovered there were aliens on the Moon (and Mars) that look exactly like us!"
Of course they would have perfect skin and expertly applied eye-liner and lipstick as well.
Quite the equivalent (to me) to many kids preferring the taste of "strawberry yoghurt" compared to real strawberries, because it's sweeter and has enhanced taste. Except for photos.
I've seen this with CGI. CGI still looks awful somehow, but people about my age think it looks cinematic, and people a decade younger think it looks incredibly realistic.
Or, you know, Samsung sells ad placements in the enhanced images to do things like turn a can of Coke into a can of Pepsi, overwrite billboards in the background, etc.
Or better, the AI improves your shitty snapshots so they come out great. Every shot is beautifully framed, perfect composition, correct light balance, worthy of a master photographer. You can point your camera any old way at a thing and the resulting photo will be a masterpiece.
The details don't quite correspond to reality; to get the framing right the AI inserted a tree branch where there wasn't one, or moved that pillar to the left to get the composition lined up. But who cares? Gorgeous photo, right?
And the thing is, I don't think anyone would care. You'd get the odd weird comparison where two people take a photo of the same place and it looks different for each of them. And you'd lose the ability to use the collected photos of humanity to map the world properly.
I think it's fascinating. Reality is what we remember it to be. We can have a better reality easily ;)
That could be fine as long as there is either a way to turn all that off (or better a way to selectively turn parts of it off) or a separate camera app available that lets you do that.
It's the future. Something hit your self-driving hover car and left a small dent. To get your insurance to pay for fixing the dent you have to send them a photo.
Your camera AI sees the dent as messing up the composition and removes it.
Your insurance company is Google Insurance (it's the future...Google ran out of social media and content delivery ideas to try for a while and suddenly abandon so they had to branch out to find new areas to try and then abandon). Google's insurance AI won't approve the claim because the photo shows no damage, and it is Google so you can't reach a human to help.
> Reality is what we remember it to be. We can have a better reality easily ;)
Cue Paris Syndrome, because expectations will also be of a better reality. Then you go somewhere, and eat something, and experience the mess that actually exists everywhere before some AI removed it from the record.
Something I learned long ago is that people typically don't want the truth, in general. They want fictional lies; they crave a false reality that makes them happy. Reality in and of itself, for most, is an utter drag if they're made constantly aware of it and dwell on it. When it comes to marketing, people eat up the propaganda techniques. They want to be fed this amazing thing even if it's not really all too amazing. They love that it tickles their reward center in the process.
This of course isn't always the case. When something is really important or significant, people sometimes do want to know the truth as best they can. I want to know the car I'm purchasing isn't a lemon, I want to know the home I'm buying isn't a money pit, I want the doctor to tell me if my health is good or bad (for some, under the condition the information is actionable), and so on.
When it comes to more frivolous things, for many, build the fantasy: sell them that farm-to-table meal you harvested from the dew drops this morning and hand-cooked, with the story of your suffering on the way to becoming a Michelin-star chef and how you're saving my local community by homing puppies from the local animal shelter with the profits... even if you took something frozen, slapped it in the microwave and plated it, and just donate $10 a month to your local animal shelter, where you visited twice to create a pool of photos to market. For many, they want and crave the fantasy.
Progress made by science and tech has, for a brief fragment of history, established techniques and made it practical, in some cases, to peel away all or at least some layers of fantasy to get to reality. We started to pierce into cold hard reality and separate the signal of truth, as best we can understand it, from all the noise of ignorance and fantasy.
For many fantasy lovers, snakeoil salesmen, and con men, pulling away the veil of fantasy and noise has been a threat and there's been a consistent battle to undermine those efforts. The whole emergence and perpetuation of misinformation and recent "fake news" trends are just some of the latest popular approaches. We've been seeding our knowledge and information more recently with increasing degrees of falsehoods and pure fabrications.
Now, enter "AI," especially generative flavors. The same people who wanted to undermine truth are foaming at the mouth at the current ability to produce vast amounts of noise that in some cases are almost indistinguishable from reality from current techniques we have. Not only that, fantasy lovers en masse are excited at the new level of fantasy they can be sold. They really really don't care or want the truth. They really do just want "a good looking picture", "to make the summary interesting", or just see some neat picture. They don't care how accurate it is. Now people interested in the truth are facing a deluge of technologically enabled difficult to seperate noise production.
Is what I'm looking at close to reality? How many layers of noise are there I should consider when interpreting this piece of information? In the past, the layers used to be pretty managable, they were largely physical limitations or resource limitations to falsify the data to a point that couldn't be easily discerned. These days... it's becoming increasingly difficult to determine this and more and more information in various forms are leveraging more sophisticated and believable noise production. Technology has made this affordable to the masses and there are many parties with interest in setting the clock back to a world where the best story tellers are looked at as the oracles of modern time.
People often scoff at ChatGPT that it seeds or "hallucinates" to interpolate and extrapolate gaps of knowledge and make connections but it does so in a way that people like. It projects confidence, certainty, and in many cases it gives exactly what people want. To me, it's scary because it's providing a service the majority seem to want and creating an onslaught of noise that's more costly to debunk than it is to produce.
The website's design is great and simple with no explanation needed of how to use it. Would be snappier if it matched on words automatically so I wouldn't need to press enter.
I noticed in my own writing the use of lazy structures, such as too many adverbs, "very", "statement, but concession" sentences, or parentheses for tangents.
It isn't that these things are always bad. It was only when reading my older work that I realized how much I was over-using them, and that it made my writing as a whole worse. My way of thinking about things, imagery for scenes, everything was getting impacted by the constant use of those sentence structures. Your thoughts can't help but be impacted if you're upping the "impact" of each emotion, phrase, description to be "very" or "lovely purple".
It's OK for descriptions to stand on their own, and the same is true for simple words. A blue coat, the sad man, the lone frog in a pond in the rain.
Write different; you'll be surprised how much you think different, too.
I worked on a CPU in Minecraft years ago and designed it to be a 7-bit CPU. I chose that number because it gave me an appreciable number of operations plus space for arguments. I had only 6 bytes of RAM and about 32 bytes of ROM. The ROM was just a circle of transparent blocks (zero) and solid blocks (one) pushed around by pistons.
The whole thing was real slow, but it was so much fun trying to design something that would perform interesting calculations. I stopped working on it as at the time Minecraft had some odd bugs with pistons that would cause non-deterministic behavior.
I think the most challenging aspect wasn't the programming or circuits, which were well understood and mapped out, but trying to create modules I could copy-paste inside a special Minecraft save editor to make the machine quickly, then manually dragging out data/command lines to hook the modules together.
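For anyone curious what "operations plus space for arguments" can work out to in 7 bits, here is one hypothetical split (the original commenter's actual encoding isn't stated; this is just an illustration): 3 bits of opcode leave 4 bits of operand, enough to address a handful of RAM slots.

```python
# Hypothetical 7-bit instruction format: [ooo aaaa] = 3-bit opcode, 4-bit operand.
OPCODES = {0: "NOP", 1: "LOAD", 2: "STORE", 3: "ADD",
           4: "SUB", 5: "JMP", 6: "JZ", 7: "HALT"}

def decode(word: int) -> str:
    """Decode one 7-bit instruction word into mnemonic + operand."""
    assert 0 <= word < 2 ** 7
    opcode = word >> 4           # top 3 bits
    operand = word & 0b1111      # bottom 4 bits
    return f"{OPCODES[opcode]} {operand}"

# A tiny ROM: each entry is one 7-bit word, like the circle of solid/transparent blocks.
rom = [0b0010011, 0b0110001, 0b0100100, 0b1110000]
for word in rom:
    print(decode(word))   # LOAD 3 / ADD 1 / STORE 4 / HALT 0
```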
Brings me back to when I first worked on an ALU in Minecraft. At least at the time, it had really annoying bugs with repeaters though, where it would occasionally mess up a signal, which frustrated me enough to give up on it.
I sat alongside customer service agents working phones, email, and chat. They use copy-pasted stuff for everything. I will watch them search word for a template way slower than I could type, but that's their process.
Re-used phrases are not a sign of a bot, IMO. But they are a sign of employees who have to respond to the same thing over and over and don't have much power to effect change.