This is one of the few topics on hn that I actually have some expertise in, after a ten year career consulting for the three big biometric systems; involved in standards work, accuracy testing, and R&D program management. Note that I have been out of the game for 7-8 years, but things do not change fast. There are several issues, some of which the biometrics community is aware of and some of which are outside their worldview.
The first is that the face matching algorithms are inherently less accurate on people of color given the sensors and algorithms in use. This has been an issue going back to color film being developed for Caucasian consumers - the range on a lot of sensors simply doesn't provide enough contrasting tones for brown and black people. The face detection, extraction, and matching algorithms are often not developed using datasets that match the demographics they will be used on, and at least 7-8 years ago there wasn't much focus on providing ethnicity-specific error rates. The biometrics community largely knows about this and understands it.
The second issue is that even the flagship programs don't measure Precision and Recall. They operate purely off metrics like False Match Rate and False Positive Identification Rate. Those are closed-set (i.e. "everyone is enrolled"), tech-only metrics that can be measured by NIST, that don't consider system usage or CONOPs. Program management asks for Precision and Recall metrics (not by name, because they don't know what they are), and tech responds with True Match Rate and True Positive Identification Rate. This is an entire worldview where whole-system accuracy has been placed fully on algorithmic error rates. Leadership generally DOES NOT KNOW that error rates are just as dependent on usage - i.e. that if you search for someone who isn't in the database, 100% of the matches will be false. They do not understand which part of the confusion matrix they are being told about.
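To make that confusion-matrix point concrete, here's a rough back-of-the-envelope sketch (made-up numbers, not any real program's figures) of how the precision of returned candidates depends on whether the person being searched for is actually enrolled:

```python
# Hypothetical illustration: system precision depends on usage, not just on
# the algorithm's closed-set error rates.
def expected_precision(tpir, fpir, p_enrolled):
    """Expected fraction of returned candidates that are correct.

    tpir       -- true positive identification rate (closed-set)
    fpir       -- expected false candidates per search at the operating threshold
    p_enrolled -- probability that the person being searched for is in the database
    """
    true_matches = tpir * p_enrolled   # you can only hit if the subject is enrolled
    false_matches = fpir               # false candidates show up either way
    return true_matches / (true_matches + false_matches)

# Same algorithm, same threshold, very different outcomes:
print(expected_precision(0.95, 0.05, 1.00))  # everyone enrolled   -> ~0.95
print(expected_precision(0.95, 0.05, 0.10))  # 1-in-10 enrolled    -> ~0.66
print(expected_precision(0.95, 0.05, 0.00))  # subject not in DB   -> 0.0, all matches false
```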
The third issue is that, while these massive federal programs are run by ethical and mission-oriented professionals who have hill-climbed their way to a pretty accurate system with stable usage, the talent and professionalism isn't there across the state and local level (there are a couple bright spots where at least the tech is well understood). Face matching systems are generally not lights-out like fingerprint or iris systems (fingerprint systems do generally require a manual verification step, but that's driven more by policy and history than the accuracy metrics require); FR requires a second step where a trained face matching expert looks at the proposed matches in the consideration set and uses several techniques to assess if they are correct - it's a notable difference compared to fingerprint that candidate faces are not supposed to be presumed matches when they come out of the automated system, and even after the human expert they are generally supposed to be evidence/leads that need to be corroborated through other means. As far as I know (again, several years out of date now), no one has checked whether the accuracy of the human experts varies by ethnicity of the subject.
Even the FBI's professionalism broke down with the Madrid Train Bombing, resulting in the arrest of a random Brown lawyer in Seattle (again, no one considered that looking for an international political terrorist in a domestic criminal database meant a much lower prior probability than IAFIS generally operates with, before you even start talking about stitching together multiple partial fingerprints, and you shouldn't be considering anything in the person's file while making an objective matching decision). A local police department is going to be much worse in terms of policy and system usage.
And finally, there is the reality that once you are enrolled in a booking database, you are accused of every crime that happens in any linked jurisdiction until the system goes offline. The probability that you will have an encounter with the police is not uniformly distributed across the population, nor is the probability that such an encounter will lead to a booking where your photo is taken and enrolled in a database forever. This is not something the biometrics community really discusses at all, but every enrollment in one of these systems is hundreds of thousands of opportunities for a false match per year, hanging over your head like bad debt.
The end result is that it's not surprising that just under one in six people in America are Black, but all six of these false arrests were Black. It's also not easily fixed in one place by "giving everyone implicit bias training."
> Leadership generally DOES NOT KNOW that error rates are just as dependent on usage - i.e. that if you search for someone who isn't in the database, 100% of the matches will be false. They do not understand which part of the confusion matrix they are being told about.
In the 21st century, should it be a mandatory requirement that people in leadership positions in highly technical fields have at least basic technical qualifications in the areas where they are making decisions?
Who defines that, though? As an example, India's Aadhaar biometric ID was set to have a "constant FPIR."
False Positive Identification Rate grows as the enrolled database gets larger, but True Positive Identification Rate does not (it actually goes down a tiny bit, as some true matches get lost due to lower-accuracy initial search passes). Both FPIR and TPIR assume that the entire universe of possible people is enrolled - i.e. that you are in a "closed set." So what they're really telling you is what the error and success rates are given how many people are in the universe that you are trying to distinguish from each other.
But FPIR is a closed-set metric, that tells you how many false positives you will generate per search. If not everyone is enrolled, you have an open-set problem. And the rate users actually care about is Precision (True Matches / (True Matches + False Matches)). They want to know if the matches that come out of the system are correct.
So looking at rates, that's simply TPIR / (TPIR + FPIR), right? And TPIR is largely constant as N grows, right? So FPIR is the effective error rate, right? Pretty much the whole industry agreed, so Aadhaar dynamically changed its threshold during initial enrollment so that FPIR would stay constant and therefore Precision should stay constant.
** Pause here to see if you can spot the issue **
The intuition that should have screamed out at everyone is that comparisons are independent events and it would be really, really weird if the precision of any given returned match depended on how many of these events were independently happening in other computing threads. It shouldn't matter at all how many independent comparisons are happening. Because they're independent.
TPIR doesn't grow because it assumes everyone is enrolled. Which is definitionally not the case during ramp up. If you want to know how many True Matches the system will spit out, you need to multiply TPIR by the probability of enrollment! The Precision is defined by how big of a population you are trying to distinguish, not how many of them are enrolled in the database. The database size is basically used to give you a prior that any given comparison is true; the simplest prior is 1/pop_size, which is the same as 1/db_size for a closed set. But not for an open one.
If the database grows by 10x, you get 10x the false matches because FPIR goes up and you get 10x the true matches because they are 10x more likely to be enrolled. In other words, 10x the matches with the same ratio of true to false. Instead, they lowered the threshold early in the lifecycle, making it more error-prone. This system cost more than $1B in a country where that's a big deal, and had a lot of eyes on it. Trust me, the "experts" were involved and a bunch of them had PhDs. Didn't help.
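A minimal sketch of that arithmetic, with round made-up numbers rather than Aadhaar's actual figures: at a fixed threshold, FPIR and the expected true matches scale together with enrollment, so precision stays put; it's lowering the threshold to hold FPIR constant that hurts.

```python
# Toy open-set model: fixed population, growing enrollment, fixed threshold.
population = 1_000_000   # size of the universe you're trying to distinguish
tpir = 0.95              # chance the true mate is returned *if* the subject is enrolled
fmr = 1e-7               # false match rate per 1:1 comparison at this threshold

for enrolled in (100_000, 1_000_000):            # database grows 10x
    p_enrolled = enrolled / population
    true_matches = tpir * p_enrolled             # per probe of a random member of the population
    false_matches = fmr * enrolled               # FPIR grows roughly as FMR * database size
    precision = true_matches / (true_matches + false_matches)
    print(f"enrolled={enrolled:>9,}  FPIR={false_matches:.3f}  precision={precision:.3f}")

# enrolled=  100,000  FPIR=0.010  precision=0.905
# enrolled=1,000,000  FPIR=0.100  precision=0.905   <- 10x the false AND true matches
```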
> All six known reports of false arrests due to facial recognition technology were made by Black people.
It's fascinating how sample size differs based on who wants what policy to be enacted. What are the false arrest rates due to people just looking at a picture and thinking "That's the one"?
Without comparing to more common forms of ID, I don't see how this data proves much at all.
Furthermore, with a large black population like Baltimore, if they mostly ran the software on black people, then yeah the false IDs will also be mostly black people.
I don't want to let the cops off the hook here. The previous commenter suggested a mechanically plausible way this could happen: "decent software used in a racist way" would produce the above problem.
Unfortunately in America it is very difficult to sue or otherwise obtain redress of grievance when the perpetrator of an act is a police officer. It is almost certainly easier to sue the company to stop the software's use than sue the police to stop them from using it.
Not sure why this is being downvoted. This is a factually accurate description of an issue highly relevant to the topic.
If you aren't aware of the specifics, research "qualified immunity", a legal doctrine that was not passed into law but invented by the courts to shield police from the consequences of violating your civil rights.
I’m of the impression that it’s relatively easy to sue, but that the municipality will just settle for millions and not fire the police officer lest they incur the wrath of the police union.
Wait, Baltimore and Detroit are like ~60-80% black, and people are drawing inferences from a set of six where all the people were black - instead of, say, 4-5 black and 1-2 white?!
[this doesn't excuse the police actually arresting the people]
I think you'd need to know the distribution of criminal suspects, which may or may not reflect the general population. For example if 50% of the suspects are white, then 80% of the population being black wouldn't matter. Right?
EDIT: On second thought perhaps not; I think some are arguing for a biased training dataset, which would presumably be the "population."
No idea - my whole point is that without more data you can't really draw any inferences here (e.g. it'd be nice to know how many people were sent for facial recognition, and the ethnic composition of that group, and it would be nice to know the misidentification rate in that group BEFORE the police went out arresting people). One answer here may be that facial recognition's failure rate is quite high, but the police don't do the requisite follow-up work when it's a black suspect vs a white suspect. Which brings me back to my original point: we need more data, but what limited data has been supplied doesn't YET give us a smoking gun.
[separately I’m hopeful that those wrongly arrested enjoy success in the civil courts]
While I have no doubt of the potential for facial recognition systems to have NUMEROUS inbuilt biases, even if every actor involved in their creation and deployment held no ill intent, there have apparently only been six such cases. It seems a bit premature for this kind of headline.
That also being said, this kind of software is just bad in general, even outside of the biases thing.
One more thing. A quote from the article: "Detroit's police chief said their facial recognition technology, when used alone, fails 96% of the time, Insider previously reported." - What?? That's insane. Surely those numbers can't be right? In the source for that, he goes into more detail with similar specific numbers. I really hope it doesn't only exist to give more ammunition in court, with nobody caring whether it works at all...
> "Detroit's police chief said their facial recognition technology, when used alone, fails 96% of the time, Insider previously reported."
When it fails 96% of the time, are they given a hundred possible matches to scroll through? If so, that's concerning to the majority of people showing up without any affiliation to a crime.
In northern Cape York (Australia) I have several friends who cannot be successfully photographed because their skin is so dark that cameras are unable to resolve their facial features.
I don't expect facial recognition to be much use on Cape York.
> In northern Cape York (Australia) I have several friends who cannot be successfully photographed because their skin is so dark that cameras are unable to resolve their facial features
If recent flagship cellphones can take amazing pictures at night time, then perhaps there's no one who has "skin so dark cameras are unable to resolve their facial features". It's a limitation of the Cape York equipment; they ought to have procured cameras with higher dynamic range and/or adjusted exposure. If it's an ongoing problem, then the people in charge haven't bothered to research and/or replace the shitty cameras that literally have one job. They ought to spend a little more money on better camera sensors that are critical to basic service delivery.
This is like saying "There are people with names so complicated that the Cape York software can't handle them", just because the names have an apostrophe/diacritic/multiple last names. The problem is with the system, not the people.
Of course with good equipment and care I'm sure you're quite right.
I admit that I should have specified "casual" photographs, and I now sheepishly acknowledge great advances in camera technology since I followed that claim up with a modern digital camera.
Yet even with good quality film cameras, and darkroom development, people have always had difficulty photographing very dark skin. So I expect that will be a continuing problem for facial recognition.
> Yet even with good quality film cameras, and darkroom development, people have always had difficulty photographing very dark skin. So I expect that will be a continuing problem for facial recognition.
This is simply because film equipment materials and processes have also been optimized for lighter skin. It isn't a problem with dark skin, it's a problem with photography as an industry systematically privileging better results for lighter skin, and even defining paleness as "normal".
1) technical limitations - even high-end color printers today have problems reproducing various shades of brown, which is what almost all Black people, in the USA at least, are (there are a few people with very dark skin which is more truly black).
If you want a torture test for a color printer, take a photo of a brown/beige wall with a medium brown sofa in front of it, lit by a single light source from one side. The range of brown colors will be very tough to produce properly.
2) tuning limitations, often of the photographer - you can look at black-and-white photos of musicians from even the 1940s, for instance, and the black musicians are exposed and developed properly. See for instance the first photo here: https://unblinkingeye.com/Articles/Harvey/harvey.html or an image search for "1940s jazz musicians"
The school photographer that the person mentioned in the article just didn't know how to adjust the exposure properly; they likely set up a distance from the camera, with a set amount of flash, set the same exposure for that school like they did for the 20 others they were going to, and popped off all the photos at the exact same settings.
Whenever you see an article about someone complaining about "privileged photographic chemical processes" -- check to see if the author of the article ever talked to an actual color scientist.
Note: there are some changes in the chemistry to allow for tinting - but that is a bias that can be overcome quite easily if photographing people who are of a different ethnicity.
> 1) technical limitations - even high-end color printers today have problems reproducing various shades of brown, which is what almost all Black people, in the USA at least, are (there are a few people with very dark skin which is more truly black).
I am not a color scientist, but I am a photo- and videographer and have done print design and I've had reason to stare into video codecs in the past, so while I'm not an expert and would not claim to be one, I have some familiarity with digital and paper color reproduction.
You are, to be 100% clear, correct about this in the current state of the world. But it's also quite easy for this to be a just-so story: printers are bad at this, sure, but I'm aware of no reason that they have to be.
Analogies are always suspect, but consider: 15-bit RGB color is noticeably less able to articulate perceptual green gradients, which is why a common color format is 5-6-5, giving the extra bit to green--that's optimizing for human biology. We sell monitors to humans, not to dogs. Now, take it back to film and to paper: is a roll of film or a printer dealing with hard limits of physics or biology, or just optimizing for the assumption that the people guiding the research, specifying product output, and (for a long time) buying that output...had lighter skin?
It doesn't have to be a conspiracy. It can just be the local maximum of capitalism.txt leading to worse global outcomes.
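For reference, the 5-6-5 format mentioned above just hands the spare 16th bit to green; a minimal packing sketch:

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit-per-channel RGB into a 16-bit RGB565 value.

    Green keeps 6 bits (64 levels) while red and blue get 5 (32 levels),
    because human vision resolves green gradients more finely.
    """
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(hex(pack_rgb565(255, 255, 255)))  # 0xffff - all 16 bits set
print(hex(pack_rgb565(0, 255, 0)))      # 0x7e0  - the 6 green bits in the middle
```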
I think it is that on monitors we use RGB and on paper, if printing, we use CMYK colors; so there is a problem just going from on-screen to print in some cases. And brown is a tertiary color:
"In a RGB color space (made from three colored lights for red, green, and blue), hex #964B00 is made of 58.8% red, 29.4% green and 0% blue. In a CMYK color space (also known as process color, or four color, and used in color printing), hex #964B00 is made of 0% cyan, 50% magenta, 100% yellow and 41% black. Brown has a hue angle of 30 degrees, a saturation of 100% and a lightness of 29.4%." (of course different browns will vary)
Since the mixing requires magenta, yellow and black, it is just that much more difficult to get it exactly correct.
If the world had used something other than CMYK, perhaps it would be easier - but it would likely still be a tertiary color which requires very careful quality control; and as you no doubt remember, early color printers suffered from "metamerism" where the perceived color/intensity would vary depending on the angle of the light hitting the ink on the paper, or the angle at which the paper was viewed.
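Those quoted percentages follow from the standard naive RGB-to-CMYK formula (real printers go through ICC profiles, but the naive math is enough to check the numbers):

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion, no ICC profile or ink limiting."""
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)
    if k == 1:                       # pure black
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

# Brown, hex #964B00
print([round(x * 100) for x in rgb_to_cmyk(0x96, 0x4B, 0x00)])  # [0, 50, 100, 41]
```

Three of the four inks are in play at once, which is part of why small calibration drifts show up so readily in browns.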
Brown is a "fake" color, in both additive and subtractive realms, indistinguishable from dark orange without contextual clues. Or one could be funny and say that orange is really "bright brown", either way.
What are you talking about? Racial bias does not affect the reduced ability to project shadows onto a dark substrate. No matter how much or how little Ilford optimized for pasty English skin, the physics doesn't change: black absorbs, white scatters.
Btw this is exactly the same reason why camera lenses are black on the inside.
Some think that if white people had been a dark colour, photography of dark skin would have been a priority, and the technology would have been developed accordingly.
But that's not my point. I have no problem also asserting that American film makers, because of their past racist prejudices, didn't bother to research a workable solution to a fundamentally physical problem.
My point is that any new visual technology will have problems capturing detail of any black objects. Vintage film didn't capture black detail because the information is lost - the object destroys it by absorbing the light.
Btw, the problem still exists. Kodak didn't magically conjure a product with ten more stops of dynamic range. Maybe a stop or two and a lot of heuristics. Grab the fanciest camera you can and try taking a picture of an object painted with Vantablack. It is impossible.
Likewise, I have zero surprise that people with dark skin trip up vision systems that have been widely deployed for only ten years. That is not evidence of researcher bias; it's evidence that it's a new technology trying to get something to work. If this issue is ignored for the next 90, then we can conclude that there is bias.
So what to do until then? Just let black people be victimized? NO! How about we throw away our techno-dystopian state and its abominable technology instead?
> photography of dark skin would have been a priority, and the technology would have been developed accordingly
No "if" about it - your hypothetical translored. Kodak only got better at capturing darker browns after chocolate and Furniture industries (lots of wood hues) complained about not being able to get good pictures quality of their products.
A team of two unlikely businesses—furniture makers and chocolate manufacturers—protested against Kodak’s films for discriminating against dark hues[1]
It is extremely difficult to get definition on a black cat no matter what technology you use. For that matter, it is harder to see definition on a black cat with the human eye. Just like it is harder to see definition in the dark. It's called color theory, refraction, absorption. It's also difficult to get depth and definition without reflective distortion on metal jewellery. Are we racist against metallics too?
> ...people have always had difficulty photographing very dark skin.
This appears to be a misinformed statement. Those who make it their business to take pictures of and process images of darker skin subjects, know how to use the technology, cameras, and software for better or optimal results[1][2][3].
The greater issue is about becoming knowledgeable and understanding the different techniques and lighting involved when dealing with lighter or darker subjects. There are those that are too deep into bias or too stubborn to study up on the subject, simply don't care to, or refuse to have different settings for people of different colors.
Unfortunately, it appears some are pushing claims or perpetuating stereotypes, as if it's not possible to solve the issue or that solutions haven't already been in existence (and in use) for the last 35 years (at least). If it would continue to be a problem, it's more a matter that certain jurisdictions and those in charge, care not to solve the issue or do anything about it. They might be quietly fine with mistaken arrests, false imprisonment, misusing AI, or abusing populations of color in general.
Facial recognition is not done with staged lighting. Your examples simply prove that photographing darker colors, whether they be skin, fabric, fur or any other colored material, requires specific lighting to capture detail appropriately.
In addition to lighting, photographic detail is also affected by the size of the aperture, exposure, and sensor size. The one thing that's variable at runtime is exposure.
Auto-exposure can trivially solve for this when you optimize the image for the target subject. If you care for facial recognition, then a washed-out (or dim) background won't matter. Adjusting exposure is not rocket science.
The only thing that matters in this case is the types of devices used to make the pictures in the training data, and the device being used to make the picture that is analyzed by the algorithm. Advances in computational photography software in “flagship phones” are not really relevant. Also, both sides of this argument are making the mistake of equating photos that are visibly recognizable to a human with photos that would provide enough distinct features for an algorithm to make an ID prediction. There could be some bias if, when people assembled the data set, they chose humanly recognizable images, but it's not necessary. Also, it's obvious that the kinds of pixel-level features that would be identifiable in a photo of a dark complexion and a fair one would be different, so either you train separate algorithms or you make sure your training data set has enough examples of both.
I come from a nearby area to OP (Daintree) and witnessed the same problem about 8 years ago. The new Samsung gear (even the lower-end A52) no longer struggles with this. I can't speak for other brands.
> Have you seen photos taken with “recent flagship phones” of his friends?
While you were being snarky - this is a great idea for an experiment, if GP can give judgement on pictures of their friends taken by a recent Google Pixel phone. I say Pixel because Google boasts that its photography AI and Pixel hardware make people with all skin tones look good.
Pretty sure wedding photographers solved this problem a long time ago, though I hope none of them let the facial recognition vampires in on the mysterious secrets of basic lighting and exposure.
Wedding photographers had a hard time with dark-skinned subjects. Color film was fine-tuned for specific subject matter back in the day, and the two common varieties available were (1) optimized to make light-skinned people look nice, and (2) optimized to make vivid colors. These two approaches were at odds, because anything which made vivid colors would exaggerate skin blemishes.
Films in category #2 sometimes worked better with dark-skinned subjects, because these films tended to be more neutral overall, but this was a bit of a crapshoot. Anyone shooting light-skinned subjects had an easier time, because they had films which were purpose-built to make their job easier.
Examples of film pairs in category #1/#2: Kodak 160NC / 160VC, Fuji NPS 160 / NPC 160. Maybe Fuji Astia/Provia. Velvia 50 fell solidly in category #2. There are also films that don’t really fit in either category, like Kodak 100UC. Digital cameras contain processing that makes many of the same tradeoffs as film, it’s just that you have a lot more flexibility to make adjustments on the digital side of things.
Designing cameras to be “neutral” or “accurate” is not really an option, when it comes to color, due to the sheer complexity of the problem. This is a lot more complicated than just thinking about RGB or XYZ. You get to choose between different inaccurate options, which are each optimized for some specific set of use cases.
Most wedding photographers aren't using film. Film is a very small niche these days. These days you shoot in raw with a digital camera, vary the exposure if needed, and set the white balance as needed when editing. If you know how to use your equipment you shouldn't have any problem taking photos of people of any skin color, not even the lightest person next to the darkest.
They were using film “a long time ago”. Wedding photographers have only been shooting digital for 20 years or so.
Early digital had a somewhat poor dynamic range. It’s gotten better, but it was really rough in the early days.
> If you know how to use your equipment you shouldn't have any problem taking photos of people of any skin color, not even the lightest person next to the darkest.
This statement seems so obviously and fundamentally wrong that I must be missing something. I meet enough photographers that know how to use their equipment, but have never taken a decent photograph in their life, except one or two exceptions which appear to be accidents. This seems like a very natural and understandable thing to me—there’s a lot to painting besides understanding how to use brushes and paint, and there’s a lot to building a house besides knowing how to use a nailgun and circular saw. There’s a lot more to programming than knowing how to use your compiler.
Maybe I’m missing the meaning here, because it doesn’t make any sense to me.
> These days you shoot in raw with a digital camera, vary the exposure if needed, and set the white balance as needed when editing.
That’s the first week of photography 101. Have you taken more advanced photography courses? Have you ever taken a photograph of a dark-skinned subject, wearing a pure white dress, and then printed it out? My personal experience is, that once you print your photos out (which is normal for wedding photos), and get some people with an experienced eye to look at them and give them a critique, you start to develop an appreciation for how understanding your equipment & process is just the barrier for entry to the field, and it makes you a novice.
Let me tell you—it is not trivial to make the picture show a flattering image of both the dark skin, and the white dress, with the delicate details in both. You need to know a lot more than just how to use your equipment. It is especially difficult if you are not in control of the lighting.
The raw workflow is nice, but your raw file goes through a lot of processing to turn into a viewable image, and many of the stock presets are modeled after the way film does it. You’re not really escaping the problem, in any sense, by using raw.
All I can say is that neutral color negative films have more than enough range for a black person in a white dress, and any wedding photographer who had too hard a time nonetheless would soon find themselves no longer photographing black weddings.
I must emphasize that there are no “neutral” color negative films. There are just films optimized for different use cases.
Once you dive into the sensitometry of how color films work, it becomes apparent that everything is a tradeoff. You have three light-sensitive layers, and you get to control their contrast curves, and you control their spectral response with sensitizing dyes. The physics and chemistry of light-sensitive materials imposes some constraints on the spectral responses of the different layers (which, if you think about it, explains why the different layers are always stacked in the same order, across different types of film).
Some films are advertised as “natural colors”, like Kodak 160NC, but if you dive into it, you realize that there is no such thing as “natural colors”, there is just a certain look you get with a certain film in certain conditions.
It’s not like it’s impossible for wedding photographers to shoot dark-skinned subjects at a wedding, the point is:
1. There are some difficulties inherent to the problem,
2. Your equipment (film or digital) is not configured out of the box to take good pictures of dark-skinned people.
A lot of work was put into films and digital cameras to make light-skinned people look good. It’s something people care about, and various companies invested money and made it happen. It will also take work to make pictures of dark-skinned people as easy, and there are still a lot of equipment and photographers out there who are bad at it.
You'd be wrong, of course. The dynamic range of the film was out of their control. And, not shockingly, a lot of lessons film companies learned were lost in the transition to digital. Or rather, they are taking time to spread.
It is annoying, as it is not a conspiracy. It is hard, though, and we see the speed of knowledge spread more now. Or, rather, the relative lack of speed.
What a red herring. Whether or not the photographer controls the dynamic range of the film has nothing to do with whether the range is wide enough or whether the film has a density curve suitable for photographing a black person.
But it has a very real impact on the settings you have to use otherwise to get a good shot?
That is, I'm mostly inclined to agree with you. But I also trusted a ton of the other discussion I've seen on this. Which seems to be largely in agreement that film itself had to change to get good pictures of darker skinned people.
I got the impression that if you go with black and white, you dodge many of these problems, amusingly. But adding color to the mix makes things very difficult.
There is such a thing as lighting, type of lighting, using brighter lights, and processing of images[1][2][3]. From a technological and photography point of view, these types of issues can be resolved. Often the issue is the refusal to create a different setup for darker subjects, or purposefully injecting stereotypes or bias to not find or use obvious solutions.
Furthermore, because people have dark skin, doesn't mean they have the exact same bone structure, facial proportions, or symmetry.
"Cannot be successfully photographed" Um... What?
I may not be a professional photographer, so perhaps there are some color correction nuances I'm not detecting, but I've been seeing pictures of very dark skinned people for DECADES with absolutely nothing particularly jarring or memorable about the quality, or lack thereof, of the image.
I recall Google making a huge deal about this at their last event. Like, "finally we can photograph black people because all our equipment isn't racist anymore!", and I didn't understand it then either.
Taking photos of people with darker skin tones isn’t harder, but it is different. And if you don’t know that, those photos will often lose facial features and look flat.
There’s lot of history here, some of it debatable, but the general story is…
Early film developing used a sample photo of a white brunette as the standard for white balance.
Some time later, ad agencies complained to Kodak that photos of chocolates and dark woods weren’t quite right.
Kodak made some tweaks. And/or photographers learned to deal with it (both for product photos and people).
Digital cameras appear. Similar mistakes/oversights made by industry.
Now Google and others claim their cameras can do dark skin.
What's the most parsimonious take?
1) at the inception of ubiquitous photography there was a path of equipment development that allowed for equal quality images of any skin tone, but those in charge spitefully decided they didn't want people with dark skin represented.
2) seeing detail in dark objects of any kind is more difficult (something we all know from using our eyes) and no one felt compelled to spend a lot of time wrestling with this very difficult problem when, at the end of the day, everyone has the general sense that, despite never having met him, they have a good idea of what Miles Davis looks like.
I don't think there was any spite involved in Kodak's early film work. Or in the imaging work by Olympus, Sony, and anybody else who was involved in early digital camera development.
But, by the time we got to Google and Apple? The problems with photographing dark subjects would have been well understood. It's sad that it took being "outed" in the media for those companies to actively address the issue.
What I don't know is how much of the issue was "new" (re-introduced by the software behind the phone camera magic) or "existing" (the same problem with color/white balance that has existed since the dawn of photography). If it was "new", that's rather damning, IMO, since the problem space was known, and the product people chose to ignore it. Put another way - some of this is just physics, but how much was physics and how much was software ignoring one set of subjects in an effort to improve outcomes for another?
But, no, probably not intentionally spiteful. Just some combination of lazy, greedy, and ignorant.
You'll be shocked to know that digital signal processing was later getting to this. Not for conspiracy reasons, but still later. And then ML was later still. Again, it is all there to be learned, but so many in emerging fields don't know what they don't know. And in images and sensors, many newcomers think things are showing an immutable and fully generalized fact.
Maybe it is hard to take a good photo with a phone in point and shoot mode. But any professional photographer and most experienced amateurs can absolutely take a good photo of them.
There are only two provisos: they have to use a manually adjustable camera, and they have to be able to control the lighting (even if only by asking the subjects to move around).
If every facial recognition camera was set up by a professional wedding photographer we'd be golden.
But seriously, the minimum requirements for contrast and low-light sensitivity for facial recognition cameras should be raised, and the algorithms should do a better job of specifying confidence. If the algorithm specifies the match is not reliable, maybe police officers won't bust down the wrong guy's door.
Very cheap small-sensor camera modules sometimes have as little as six bits of dynamic range, which will genuinely cause difficulties in photographing very dark-skinned people. A decent smartphone camera sensor will have at least ten bits of dynamic range and flagship mirrorless cameras are exceeding 14 bits, which approximately matches the dynamic range of the human eye.
For a reasonably good, reasonably modern sensor, any problems are somewhere else in the imaging chain. An 8 bit JPEG actually has about 11 bits of dynamic range due to the gamma curve, but excessive compression loss or color space errors might cause clipping. Most displays and commodity printers will have less than 10 bits of dynamic range, so you're going to see some level of dynamic range compression. Any image processing algorithms could cause a detrimental loss of dynamic range if they're weighted towards preserving highlight detail.
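A quick sanity check on that "about 11 bits" figure, using the standard sRGB transfer function (the gamma curve most 8-bit JPEGs are encoded with):

```python
import math

def srgb_to_linear(v):
    """Standard sRGB EOTF: linear toe near black, power curve elsewhere."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

darkest = srgb_to_linear(1 / 255)      # smallest nonzero 8-bit code value
brightest = srgb_to_linear(255 / 255)  # full white
print(math.log2(brightest / darkest))  # ~11.7 stops of encodable dynamic range
```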
I have absolutely no doubt that someone from FNQ or the top end might experience real problems with cameras and I have no doubt that those experiences are hurtful and marginalising. Those problems are the result of either poor quality or poor design choices, not any inherent limitation of imaging technology.
You need to look at it from the perspective of the system. The system has a database of individuals and is asked "does this photo matches any of the individuals in your database?"
If someone whose photo is basically a dark circle with eyes is entered in the system then the next time you ask the system to identify a photo that is basically a dark circle with eyes the system will respond to with the previous match.
Is it plausible that cameras have more trouble with dark skin because of reduced contrast of facial features? Or is that already studied and discarded, and this is just a training set problem?
That's very plausible. But then that means there is an inherent bias against people with darker skin. So the more interesting question is: why wasn't that bias anticipated and caught, and if it was known, why was it allowed to go to production where it affected the lives and liberty of one group of people disproportionately?
No consequences, profit-driven company, metric-driven agencies, nobody cared.
Constructing a modern-day criminal justice system from the ashes of the Bad Old Days is going to be a project for our generation or the next or the next after that, but it probably requires burning things to the ground.
In the meantime: Most of the liberal democratic ideals you and I were raised on with regard to criminal justice were from academia musing on what should be, or from copaganda television shows lying about what is. They live in our heads - if you want them to be real, you have to make them real.
There's also the chance that they didn't care to build their training set to match the expected population. Another industry dealing with this is automotive. Until recently, crash tests were done exclusively on male crash test dummies. This resulted in higher mortality rates for women, as things like seatbelts and airbags were tested on models not reflecting their bodies.
This would all be because either they didn't care or they didn't realize they should care. Which is to say, I agree with you, and the problem you touched on is far more pervasive and affects more people than just the legal system, and we need to start doing a better job addressing it.
Because the people selling the software reported the false positive rate as x per y but just accidentally forgot to report that ALL of the false positives are of one single group. I would love to be in a sales meeting with a company that develops this type of software and the police that buy it.
it is definitely part of the problem. historically, film was designed for optimal rendering of light skin, with little consideration given to darker skin. that trend has continued with digital sensors and modern image processing. any photographer or videographer that regularly images black skin is aware of this.
yes, it is also a training set problem. training on real-world data will reproduce racial bias, because racial bias exists in the world. racism in, racism out.
but the more direct cause of this kind of thing is the racism of the police using these tools, the states deploying these tools, and the private enterprises making these tools.
> yes, it is also a training set problem. training on real-world data will reproduce racial bias, because racial bias exists in the world. racism in, racism out.
That assumes that all disparities of this kind are due to racism, but the fact remains that the disparities in the racial description of arrests match that of descriptions given by victims when they report crime. Could an entire nation including black people (who are proportionally more likely to be victims than other races) be in on the conspiracy?
Let's focus on trying to avoid innocent people being victimised, either through crime or misapprehension, than making the assumption of ill intent where there is none.
The studied problem is that with representation in tech at all levels, the product wouldn’t have been released at all or greenlit to begin with.
I agree that “the most qualified person for the job should be hired”; we just don't agree when I point out those qualifications have nothing to do with prior professional pedigree or academic name branding, as opposed to having a different background, since that would be more useful here to say “hey wait a minute, this product doesn't work” - sparing all of us from the debate and just having better technology.
Just like they do with black cats, dark clothing, lack of lighting. Dark colors absorb light, light colors reflect light. It is the reflection of light on a surface that gives it definition. I had a black cat that was nearly impossible to get spontaneous pictures of and I take night photography. Darkness requires more exposure time which equates to standing still longer. Facial recognition grabs people moving around in whatever lighting is available. It will inherently do better on lighter colors and contrasts just like the human eye does. Why do burglars and stage hands wear all black? So that they are harder to see.
As someone with an extensive background in facial recognition and feature identification, yes, this absolutely is the case. You need more training data and more preprocessing, and often you just fundamentally can't get the same accuracy you can on pale faces because the information isn't there in the pixels. Same reason why it's harder for self driving cars to see black pedestrians at night. Fundamental information content. Not "racial bias". Of course, the word racism has that peculiar property of short circuiting our critical thinking because the outrage is so tantalizing...
Not just the cameras themselves, you take low contrast footage then throw Video Compression on top of that, and you're not left with much to work with.
"the Baltimore Police Department ran nearly 800 facial recognition searches in 2022. The Detroit Police Department makes about 125 facial recognition searches each year,"
So, 6 total that we know about, over 925 searches? How many human-initiated IDs (that is, searches by cops themselves, however you want to define that) led to false IDs?
Why do you and others on the thread keep asking that. If a human made a mistake he or she is held accountable individually and punished. The software cannot be punished, it does not have the ability to fear punishment and be perfectly careful.
The point you are missing is that the software isn't augmenting human responsibility, it is largely replacing it. Even if not on paper, humans now have an excuse to lay responsibility on the software.
This is the same problem I have with ML and my work, if I have to triple check its output anyways then what good is it? If I don't, who will take responsibility for the mistake?
Even if humans made more errors, you can tune humans by adjusting their incentives. For example, if they faced a couple of years of jail time would humans keep the same amount of error rate?
The proper use of such software is to narrow down the list of suspects, humans then have to do much more vetting and verification, assuming the software's results are false before concluding the identification.
The time saved scouring through a large list of suspects should be spent painstakingly vetting and independently corroborating software identified hits.
There is no replacement for humans doing the actual/final identification and assuming the responsibility for their mistakes.
Engineers can be held accountable for a failure in a system they said would work, and it happens when something goes wrong with a bridge or building. I do not agree with the point. It is like saying, "A condominium in Florida cannot be held accountable." :-)
We're talking about systems, algorithms, black box models that we don't understand or have the ability to reason about, that are being used to make decisions for us.
Those systems are already proven to have embedded biases.
But now instead of a human we can question, we only have inscrutable machines that no one, not even the people who built them, understand.
If condominiums in Florida were making the decision about who could get a mortgage and live in them, then your analogy would make sense.
But they don't.
There are, on the other hand, computer systems that absolutely do, and it's very possible (I would predict probable) that they will end up effectively re-creating redlining.
Furthermore, going back to your broken analogy, unlike the case with building codes and engineering professional organizations, there are no laws or regulations that could be used to hold accountable those who build or use those systems.
You can inspect the design of a condo but ML models can't be examined, their training data is the only thing you can examine. It's like being able to inspect the raw materials but nothing else for the condo.
And further to your point, if a DA's office keeps arresting people incorrectly, they won't keep that job.
The fact that we as a society are not okay with just arresting the wrong person is how we started talking about 'stop & frisk' and its shortcomings in the public domain. It's how we had BLM summer and many other protests after the killing/arrest/etc. of innocent people.
and that's how we should respond when the wrong person is arrested
who can I call when "the AI got it wrong"? It's worse than google customer service.
The reality here is none of that will happen in all but the most egregious cases.
If either of those scenarios came to pass the police unions stand ready to defend them. At most the cop will get some paid leave and a settlement payout (as well as the victim at great expense to the town/city) and maybe cop will be transferred to a new department while the whole complaint is put under a NDA.
> Even if not on paper, humans now have an excuse to lay responsibility on the software.
This, right here, is the real key and I hope people really fully appreciate this.
We can now have machines that can be biased for us, and when that bias becomes apparent, people can shrug their shoulders and say "what can you do, it's the machine's fault".
That is a very dangerous road, and we're already well on our way down that path.
It honestly seems to me that the core issue here is actually the same one found in digital voting technology - where is the vetting? There must be vetting at multiple levels of the process, or these use cases are negligent at best and malicious at worst.
"Our highly advanced computer technology has analyzed the voting results, and thrown out all of the results it has determined to be fraudulent. If your vote was thrown out due to the algorithm's fraud determination, you will be arrested for attempting to subvert democracy. What's that? You want to double check the computer? That's not necessary, because it's a computer after all, it doesn't make mistakes."
"Our highly advanced computer technology said with 100% certainty that you are the person responsible for this murder. You say you were on the other side of town at that time, and you claim to have witnesses. This is impossible, because our computer algorithm doesn't make mistakes, it's a computer after all. You are hereby sentenced to the death penalty, and the computer has recommended that you receive no appeal, since it has determined you to be guilty beyond any doubt."
> There must be vetting at multiple levels of the process, or these use cases are negligent at best and malicious at worst.
You've got it exactly.
The problem is modern AI systems cannot be vetted because neither their creators nor their users actually understand how they work (yes, the mechanics are understood, but the emergent properties are not).
I believe we need regulations that prevent the use of unauditable technologies in a bunch of key industries, including banking, law enforcement, healthcare, etc. Basically anywhere where a person's rights may be violated by these systems.
We actually fought this same battle not too long ago with photo radar systems due to issues with their accuracy, and the result was mandated transparency (at least in some jurisdictions). These new technologies just supercharge the problem.
Maybe try reading up on the reasoning behind the whole "blameless retrospective" thing that's popular to advocate for when software development discussions pop up around here?
Blameless retrospectives only make sense when everyone is actually incentivized towards quality of output. That is historically true of neither (American) police organizations nor the vendors who sell to them.
There's work going on to look a little further into the infrared to fix this.[1] See the image set "Sample database images in all three spectra". High dynamic range images help, a lot.
I had a philosophy teacher who had worked for Polaroid for a long while developing film chemistries. One of the things Polaroid had to put large amounts of special effort into was tuning chemistries so the facial features of dark-skinned people would show up decently. This was apparently related to, but ultimately a separate problem from, environmental low contrast.
So when electronics get confused I’m definitely not surprised.
In America "white" is not a colour, Italian or Arabs with dark skin are also "white", unless they are from south America, then they are latinos. The only true meaning of white in the US is "not black", so those things working in Japan is not a valid counter point.
It's true that "white" is not a skin color, but also let me point out that people with lighter skin tones than what is commonly called "white people" are hilariously considered "people of color".
Skin "color" is some diffuse concept that doesn't really have much to do with the actual color spectrum and, in many cases, not even physical features. The extremes are easy to tell, but there's a whole world in between with some pretty arbitrary definitions.
I can assure you Middle Eastern doesn’t quite qualify as White in the racist parts of the country. Even if that’s what the US Census Bureau would have you select.
He's trying to generate a post-hoc justification for an inaccurate statement, there's nothing more to it than that. Still, self-identification matters and I wanted to see if he'd cross the Rubicon and try to force the white identity on groups that don't want it.
Bearing in mind his linchpin example - Arabs - are neither perceived as nor self-identify as white [1], it's bizarre to suggest that it follows that the Japanese, a separate case anyway, must also be white.
Add to that his conflation of ethnicity with race [2], and it might be the single most stupid comment I've ever seen on HN. Just say "OP meant to say black", there's no need to construct an alternate reality in which non-white and black are synonyms.
"many in the MENA community do not share the same lived experience as White people with European ancestry, do not identify as White, and are not perceived as White by others."
> "We believe this results from factors that include the lack of Black faces in the algorithms' training data sets..." the researchers wrote in an op-ed for Scientific American.
> The research also demonstrated that Black people are overrepresented in databases of mugshots.
The sort of clear-headed thinking that makes the AI bias field as respected as it is.
The actual quote that the mention in the article refers to:
"Using diverse training sets can help reduce bias in FRT performance. Algorithms learn to compare images by training with a set of photos. Disproportionate representation of white males in training images produces skewed algorithms because Black people are overrepresented in mugshot databases and other image repositories commonly used by law enforcement. Consequently AI is more likely to mark Black faces as criminal, leading to the targeting and arresting of innocent Black people."
So they're saying that simultaneously the training set has too few black faces and the set being compared against has too many.
> Consequently AI is more likely to mark Black faces as criminal, leading to the targeting and arresting of innocent Black people.
I don’t see how this relates to simple facial recognition. It doesn’t appear that they’re scanning for “criminal physiognomies” but for specific facial matches.
Furthermore, it seems that this whole line of argumentation implies that facial recognition software may be mistaking innocent Black people for non-Black perpetrators, which I don’t see any evidence for. How does this increase arrest rates for Black people if AI just can’t tell them apart? In all likelihood, the person who got away is also Black.
It doesn't imply that it's matching black people to white perpetrators. The claim is that A) the model itself is worse at matching for black faces and B) the database being searched against is often disproportionately made up of black faces.
Give it a photo of a black person to search on and you're probably getting a black person as a match, but the likelihood that it's actually the same person is lower than it would be if you were searching for a white person.
The quote doesn't say it's increasing arrest rates for black people, but arrest rates for innocent black people. If you use facial recognition and it's 99% accurate for white people and 75% accurate for black people (numbers chosen arbitrarily), you're going to target a lot more black people incorrectly even if you're never incorrectly matching photos of white criminals to black people.
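Carrying those arbitrary numbers through (purely illustrative, not measured rates): even with equal numbers of searches and no cross-race mismatches, the lower accuracy translates directly into more innocent people flagged from one group.

```python
searches = {"white": 500, "black": 500}    # hypothetical equal probe counts
accuracy = {"white": 0.99, "black": 0.75}  # the made-up rates from the comment above

for group, n in searches.items():
    bad_leads = n * (1 - accuracy[group])
    print(f"{group}: ~{bad_leads:.0f} incorrect identifications")

# white: ~5 incorrect identifications
# black: ~125 incorrect identifications
```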
> It doesn't imply that it's matching black people to white perpetrators. The claim is that A) the model itself is worse at matching for black faces and B) the database being searched against is often disproportionately made up of black faces.
Right, I understand that in the context of this specific quote, but the article implies that claim.
> Give it a photo of a black person to search on and you're probably getting a black person as a match, but the likelihood that it's actually the same person is lower than it would be if you were searching for a white person.
Lower, but by how much? The number given here is six in all. It feels very premature to use probably in that sentence. (Edit: misread that as you’re probably going to get a match)
> The quote doesn't say it's increasing arrest rates for black people, but arrest rates for innocent black people.
I meant this quote from the article: “facial recognition leads police departments to arrest Black people at disproportionately high rates.”
But I agree. It seems that there is a disparity in accuracy, it’s very unclear on how much of one but so far it appears that we’re talking about a fraction of a percent. We only have a sample size of six to draw on. We don’t know the demographics of the districts this has been employed in, and it seems strange to assume that they’re the same as the American population at large. I mean the first example is from Detroit.
The article posted to HN in this relevant section for the start of this thread (the part about more/less black people in the data sets) quotes/paraphrases a Scientific American piece (where I got the quote with "innocent" in it from my comment), which itself is based on a paper in Government Information Quarterly.
The paper is what the article here links to when they say that facial recognition leads to disproportionate arrests of black people, the part you're mentioning now. That's a separate finding of the paper from the statements about possible reasons "why" that are based on the training and search sets.
The main thrust of the paper is actually those numbers: they find that black-white arrest disparity is higher in jurisdictions that use facial recognition.
"FRT deployment exerts opposite effects on the underlying race-specific arrest rates – a pattern observed across all arrest outcomes. LEAs using FRT had 55% (B = 1.55) significantly higher Black arrest rates and 22% lower White arrest rates (B = 0.78) than those not implementing this technology."
They do some stuff I'm not really qualified to opine on to try to control for the fact that obviously facial recognition adoption is also correlated to department size, budget, crime rate and things like that. Of course the usual caveats still apply, particularly that they're not claiming or attempting to show causation.
This doesn't rescue their claim. If the suggested class imbalance really exists in the training/test sets, the model will preferentially identify whites as criminals.
The claim is that the model is worse at telling black faces apart from each other.
The system is trained to match images of faces, not identify criminals; it's not comparing things to its training set to give a "criminality" score. The training data is just what has taught the system how to extract features to compare. You run an image of an unknown person against your database of known images, and look for a match so you can identify the unknown person.
If the model is just "worse at" black people, it's going to make more mistakes matching to them.
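To be concrete about what "matching" means here, a minimal sketch of 1:N identification (Python; the embed function is a stand-in for a trained model, and the names are hypothetical, not any vendor's actual pipeline):

    import numpy as np

    rng = np.random.default_rng(0)

    def embed(image):
        """Stand-in for a trained face-embedding model: turns an image
        into a fixed-length, unit-norm feature vector."""
        vec = np.resize(image.astype(float).ravel(), 128)
        return vec / (np.linalg.norm(vec) + 1e-9)

    def search_gallery(probe_img, gallery, k=5):
        """Rank gallery identities by cosine similarity to the probe.
        gallery: dict mapping identity name -> enrolled image."""
        probe = embed(probe_img)
        scores = {name: float(embed(img) @ probe) for name, img in gallery.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

    # Toy usage with random "images"; in a real system the embedding model
    # is the learned part, and how well it separates faces within each
    # demographic group determines how often the wrong person ranks highly.
    gallery = {f"person_{i}": rng.random((32, 32)) for i in range(100)}
    probe = rng.random((32, 32))
    print(search_gallery(probe, gallery, k=3))

Nothing in that loop knows or cares about criminality; the only learned behavior is how faces are turned into comparable vectors.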
When this software is being sold to these departments, it's amazing that people in the chain don't seem to be talking enough about the training set used or its performance on particular populations. If you're going to arrest someone or build a case on facial recognition, you'd think they would be prepared to defend its accuracy across a broad range of demographics. Embarrassing failures and mistaken arrests hurt their program, not to mention the money the city loses in lawsuits.
The answer to this conundrum might be that neither the departments nor the vendors are particularly interested in avoiding bias. Paying lip service is generally sufficient.
It makes sense to me? The algorithm specialises in distinguishing between the faces in its training set. It works by dimensionality reduction. If there aren't many black faces there it can just dedicate a few of its dimensions to "distinguishing black face features".
Then if you give it a task that only contains black faces, most of the dimensions will go unused.
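A toy way to see that effect (Python, synthetic data, purely illustrative): fit a low-dimensional projection on an imbalanced dataset and check how much of each group's variance the learned dimensions actually capture.

    import numpy as np

    rng = np.random.default_rng(1)
    dim, k = 50, 5

    # Each group varies along its own distinct directions in feature space.
    basis_a = rng.standard_normal((dim, 10))
    basis_b = rng.standard_normal((dim, 10))
    group_a = rng.standard_normal((950, 10)) @ basis_a.T   # 95% of the data
    group_b = rng.standard_normal((50, 10)) @ basis_b.T    #  5% of the data

    X = np.vstack([group_a, group_b])
    X -= X.mean(axis=0)

    # Principal components of the combined (imbalanced) training set.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    components = vt[:k]

    def variance_captured(group):
        g = group - group.mean(axis=0)
        projected = g @ components.T @ components
        return (projected ** 2).sum() / (g ** 2).sum()

    print("group_a:", round(variance_captured(group_a), 3))
    print("group_b:", round(variance_captured(group_b), 3))

The learned dimensions mostly describe the majority group, so the minority group's samples get squeezed into directions that barely distinguish them from one another. Modern face models aren't literal PCA, but the intuition about under-represented variation carries over.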
Are black faces overrepresented or underrepresented? According to AI researchers, we're faced with Schrodinger's Mugshot--there's simultaneously too many and too few!
It's phrased accurately, if confusingly. The bigger, un-fixable problem is that people are more apt to believe a computer has calculated the correct answer, when by its very nature dropping often-poor images into a facial recognition search is almost always going to produce results, even if most of them are spurious and the real ID may not even be among them.
Without additional leads, police are strongly incentivized to pick one of the results and run with it, and in many cases that is enough to get a plea or conviction even if the person didn't do it, especially since the person selected was in the database in the first place because they have a record.
Convictions/pleas are obtained all the time with similar levels of proof.
This is fundamentally the same problem as dragnet searches of phone GPS to see who was in a place during a window of time. It could be a valuable investigative tool, but it's also a great way to "solve" a crime by finding someone to pin it on.
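The "you always get results" part is easy to see in a sketch (Python, toy numbers): a ranked similarity search returns its top-k candidates whether or not the person in the probe image is enrolled at all.

    import numpy as np

    rng = np.random.default_rng(2)

    gallery = rng.random((10_000, 128))   # embeddings of enrolled people
    probe = rng.random(128)               # someone who is NOT enrolled

    scores = gallery @ probe              # similarity to every gallery entry
    top5 = np.argsort(scores)[-5:][::-1]  # five "candidates", no matter what
    print(top5, scores[top5])

    # Without a well-chosen score threshold and an honest prior on whether
    # the person is even in the database, every one of those hits is a
    # false lead that still looks like "the computer found five matches".

That's the open-set problem: if the true person isn't in the gallery, 100% of the returned candidates are wrong, and the ranking by itself gives no hint of that.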
Because models are trained and validated on real data. Given a training set of crimes and corresponding surveillance footage, arrestee info becomes the (supposedly not-noisy) label for "who is the guy in the footage."
With a moment's thought, even the most emotive amongst us should see that the mugshots will be part of the training set--the photographed individuals are, after all, the class of true positives.
You train a model on a bunch of photos of white people, and a few photos of black people.
You then deploy that model, and use the model to match black person detained by racist officers against a database of photos that the police have from before. In that database the majority of people are black.
Shitty AI that was not properly taught what black people look like because most of the people in the training data were white, says that it found a probable match for detained black person.
Racist officers do not attempt to second guess the computer, so they throw innocent black person into their car and drive off to the police station.
Come on, we know that there is variation, sometimes drastic, between populations on all different facets of life. But this one? No. It would be racist to even broach the subject. That’s why we know White people are to blame for it.
"We used facial recognition to identify people to arrest" is a statement that belongs in dystopian sci-fi, not reality. False positives, while the worst part, aren't the only issue.
This is the microcosm of the biggest problem facing humanity.
Of course this is the result, because our society can only create data that is embedded with our personal and social structural biases and fears.
There is no way to possibly align artificial intelligence, because we are only demonstrating antisocial, biased actions via the data we create. It's just a simple fact; there's no way around it and no pathway to fix it.
As long as we are a selfish, antisocial society creating antisocial data, we're going to make antisocial, biased machines.
This illustrates a serious problem with AI. We (Humans) are ceding our autonomy to AI without any evidence of the validity of the outcomes. The industry puts up a front saying they will be careful, but the reality is that they are faking it until they make it. Techies want to fix the problem by adding more data points or tweaking the algos, but the real fix will take actual structural societal changes which is time consuming and expensive. Technology alone will not fix this.
> The research also demonstrated that Black people are overrepresented in databases of mugshots, and that skews AI.
> "Consequently AI is more likely to mark Black faces as criminal, leading to the targeting and arresting of innocent Black people," they wrote.
Um, no. This is not a system that turns a picture into a criminality score. This is a system that takes one picture as input and tries to search its database for other pictures of the same person.
Exactly. There was a Netflix doco a while back showing the struggle of black people to get recognised by facial recognition. I couldn't help but feel that they were advocating for their own oppression, and the conspiracy theorist in me wondered if it was a ploy by some three-letter agency.
The facial ID standards should move toward adding a 3D scan of the face (especially now that biometrics are digitized and stored on chips in ID cards and passports). That would solve this kind of problem.
Neural networks could then incorporate that data into their training and be accurate again, and surveillance systems could use a complementary lidar.
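As a sketch of what that could look like on the model side (PyTorch, hypothetical architecture, not any actual ID standard or vendor system): depth from a 3D scan or lidar could simply be an extra input channel alongside RGB, so the embedding sees facial geometry as well as skin tone and contrast.

    import torch
    import torch.nn as nn

    class RGBDFaceEmbedder(nn.Module):
        """Toy embedding model that takes RGB + depth (4 channels)."""
        def __init__(self, embedding_dim=128):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=3, padding=1),  # R, G, B, depth
                nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, embedding_dim)

        def forward(self, rgbd):
            features = self.backbone(rgbd).flatten(1)
            return nn.functional.normalize(self.head(features), dim=1)

    # Toy usage: a batch of 2 fake 112x112 RGB-D face crops.
    model = RGBDFaceEmbedder()
    print(model(torch.rand(2, 4, 112, 112)).shape)   # torch.Size([2, 128])

Whether that actually closes the accuracy gap would still have to be measured per demographic group; geometry helps with contrast problems but doesn't fix an unrepresentative training set.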
"Porcha Woodruff — a mother from Detroit, Michigan — became the first woman to report that police falsely identified her as a suspect using faulty facial recognition, The New York Times reported."
As explained in the NYTimes article (yet omitted by BI), the victim picked her photo out of a line-up.
"Five days after the carjacking, the police report said, the detective assigned to the case asked the victim to look at the mug shots of six Black women, commonly called a “six-pack photo lineup.” Ms. Woodruff’s photo was among them. He identified Ms. Woodruff as the woman he had been with. That was the basis for her arrest, according to the police report."
It was facial recognition tech of some sort that put her into the six-pack:
"The ordeal started with an automated facial recognition search, according to an investigator’s report from the Detroit Police Department."
"A detective with the police department’s commercial auto theft unit got the surveillance video from the BP gas station, the police report said, and asked a crime analyst at the department to run a facial recognition search on the woman."
I wonder if problems like this would be drastically reduced if there was an instant, automatic compensation offered in any case of mistaken arrest, like $5,000 or something.
Contrast helps. But running a convolution model on a single video frame, or aggregate of frames, is not how human vision works either. Give a human a glob of pixels with no context and they will struggle to identify it too. Identification involves motion and knowledge of the scene.
> African Americans commit about half the murders in the US
not disagreeing, but you've also committed the statistical cardinal sin of survivorship bias here. It's not half the murders, it's half the _prosecuted_ murders. You don't know how many are never caught or even known about.
And you left out another major issue. It's half of the homicides convicted as murders. People pleading guilty to negligent homicide or claiming self defense (or simply winning their case) aren't included. Which are outcomes associated with good lawyers, which is associated with wealth, which is highly correlated to race.
The clearance rate for black victims of murder in black neighborhoods (where the perpetrator is likely to also be black) is the lowest out of all categories, which makes your point actually work against you
Unknown would be its own category, which means some of those unknown might actually make the African American stats even worse if they were attributed.
And it is, imho, better to let 9 murderers walk free and prevent 1 wrongful conviction than to accept 1 wrongful conviction in order to keep 9 murderers off the streets.
Did you really just post an inflammatory, unsourced, incomplete statistic, and then, when someone points out all the other correlations you left out, say "who cares"?
He was talking about the cause. I never mentioned the cause. I said which group. I never said what causes that group to do it. It surely isn't wealth or education: Eastern Europe has been poor, and no one there kills for a pair of fancy shoes.
> He was talking about the cause. I never mentioned the cause.
This is a lie, you insinuated the cause and now are pretending you didn't. This is a common dog whistle by extremists.
> I never said what causes that group to do it. It surely isn't wealth or education
Now you are directly saying it and basing it on nothing. Being a poorer country overall has nothing to do with anything. There are many other countries that contradict your narrative.
> no one kills for a pair of fancy shoes
This is another racist dog whistle and has nothing to do with reality.
If you aren't doing these things on purpose you should reevaluate what you accept as true and look into these assumptions you hold.
Have you considered actually answering why poor Polish people didn't kill each other for shoes, instead of saying it's a dog whistle to mention it? You don't add anything besides labeling things randomly. What you're doing doesn't work anymore.
Nobody kills each other for shoes. This is a racist dog whistle you are repeating over and over. It is not backed up by anything. You wrote it multiple times and you haven't even explained why you think that's true.
It isn't someone else's job to prove your claims for you. You don't get to say something outlandish then get mad at the other person for 'not googling it'.
Your searching for something specific and finding it has nothing to do with statistics. You moved to finding specific news articles with the title you wanted. That would be like me saying that a lot of Polish people are serial killers:
I would never blame it ON the skin color. That would make zero sense. I don't get more prone to murder when I get a tan. But you can't say it's just from being poor. Poor Eastern Europeans don't murder each other for a pair of shoes. Something is going on in the culture of the African American community. I'm not sure what happened in the last 50+ years, but home ownership and marriage rates are WAY down in the African American community compared to 50 years ago, and it's causing a lot of problems.
Edit: I can't even believe you said I blamed it on skin color. I talked about what the group is doing, not what is making the group do it. Amazing people miss this
> I can't even believe you said I blamed it on skin color
Unless you believe yourself to speak for America, I very clearly didn't say that.
> I'm not sure what happened in the last 50+ years,
Institutional racism, drug wars, three strike laws, mandatory minimum sentencing, relentless propaganda and demonization, underfunded schools, leaders smeared and assassinated, etc etc etc.
And now you, you specifically, are blaming it on "their culture" and acting like it's a big mystery. Ugh.
All of those things plausibly lead to cultural differences. How else do you get from those things to “elevated homicide rates” without traversing “culture”? Feels like you’re violently agreeing.
I'd call it 'violence' when you assassinate leaders, throw fathers in jail for nonviolent drug offenses as deliberate racist policy, etc., and then stir people up against those with darker skin tones because of "math" that was stripped of all context.
This thread is giving me a headache. Feels like when thedonald would raid subs back in 2016.
For what it's worth, "violent agreement" is a specific concept: it just means arguing in a heated tone with someone you actually agree with. I'm not actually suggesting you were behaving violently.
I agree that your additional context is important, but it has opened almost every single conversation about race in America for the last 10-30 years. At a certain point, it would be good if we could advance beyond it, because absent the verboten cultural aspect it's difficult to imagine how these things (particularly those which were officially ended decades ago, like the overwhelming majority of our racist policies) are manifesting as population-level differences in behavior apart from culture. For what it's worth, we should absolutely release nonviolent drug offenders and repeal any remaining racist laws, but I don't think anyone expects those things alone to solve crime in black communities. Refusing to look at it and shouting "racist" at anyone who does certainly hasn't worked so far.
> At a certain point, it would be good if we could advance beyond it
It sure would; but if you can't call out literal white supremacist talking points then that's gonna be pretty hard to do.
> Refusing to look at it and shouting “racist” at anyone who does certainly hasn’t worked so far.
I didn't do that, so I don't know why you're bringing it up. There are literal racists toxicing up this thread, deliberately, so it's weird you'd focus your debate on the people adding the context which "has opened almost every single conversation about race in America for the last 10-30 years" yet is still somehow necessary to say.
> I didn't do that, so I don't know why you're bringing it up
I'm not trying to mischaracterize you, but I'm pretty sure you used "literal white supremacist talking point" to refer to the parent's suggestion that culture seems to play a role in disparities.
> so it's weird you'd focus your debate on the people adding the context ... yet is still somehow necessary to say.
My debate wasn't focused on you adding this context, my argument is that critiquing culture is a valid thing to do. I never argued that you oughtn't add context, only that omitting the context (as it is common sense) isn't proof of racism in the same way that not opening a conversation with a treatise on gravity doesn't make one 'anti-science'. I'm trying to read and respond to your comments carefully so as to not accidentally argue against straw men--please afford me the same courtesy.
Got to be honest, HN is the last place I expected to see the "12%" line. Hope this triggers some moderation inquiries to make sure it doesn't evolve into something worse.
This community's competency in politics is pretty all over the map.
Competency in technical areas and competency in politics or history aren't strongly correlated. Watching the technorati running the FAANG-tier companies of the world fall on their faces over and over on these topics is a strong indicator that it isn't an issue limited to this community.
You're calling people white supremacists just because they make some of the same points white supremacists do. White supremacists also love orange juice! Are you one also? Why do you drink fruit juice???
I hope each of these facial recognition matches is reviewed by someone of the same race / skin color as the alleged perp. Often, to someone of a different race it can be hard to see differences that are obvious to those of the same race. I have heard both black and white people say "They all look the same to me" regarding people of the other race. I am sure the same applies to Asians.
As a white person who grew up as a minority in my neighborhood, I have trouble distinguishing between white people with the same haircut, a problem I never seem to have with people of color. I don't think the race of a person is what matters, so much as the environment their brains were trained on.
Oh man, there's nothing more validating than watching people of the same race mistake one person in their friend group for another, after being casually called racist for doing the same thing with the same two individuals.
Some races really do look more similar to each other than others. In America there is a high degree of genetic mixing that you don't find in a more homogeneous gene pool such as Japan or parts of Africa.
It’s just to your untrained eyes that they look “more similar”. No, all Asians do not look alike. Also, contrary to popular belief, they do age. And blacks do “crack”. They just have different patterns of progressions, and your brain is not trained to notice the nuances.
I am almost certain Asians (not Indians etc) look more alike than other races. Sure, there may be differences that let you identify an individual, but at a distance of say 20 feet, can you resolve those differences and identify who you’re looking at? Clothes and hair styles might help, but if not available can you simply identify a face? You’d need a close look and a lot of practice.
What about if they wear a mask? When an Asian wears a mask, they could almost be anybody. Asians are even more androgynous than other races too so it’s possible to not even know if they are male or female without further context, typically hair and body shape.
I’m not even sure what a “typical” white or black person looks like, but the typical Asian always seems to be straight black hair and certain shapes of eyes and light skin tone. Am I crazy or is this accurate?
There is great variety in the "Asian" look. A Vietnamese person does not look like a Japanese person, who does not look anything like a Filipino, who in turn does not look like an Eskimo. They are all "Asians," with the typical "Asian" features, but they look very different, just as a Scandinavian looks "similar" to a Bulgarian yet is distinct enough.
That's a silly argument. Go to Norway (or elsewhere in Scandinavia): there is objectively less phenotypical variety there too. Would you extrapolate that all Caucasians look alike?
Given that The Media only reports on mistaken arrests when the victim is black, it makes sense that every victim The Media reported on was black. It's called a self-selecting sample.
As far back as 1985 Brian Winston wrote[1], "It is one such expression – color film that more readily photographs Caucasians than other human types – that is our concern in this piece." Richard Dyer wrote in 1997, "The aesthetic technology of photography, as it has been invented, refined, and elaborated, and the dominant uses of that technology, as they have become fixed and naturalised, assume and privilege the white subject."[2].
Key findings in the bias of facial recognition come from a 2018 paper in Proceedings of Machine Learning Research which found, "We evaluate 3 commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%. The substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms."
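For reference, the kind of disaggregated evaluation that study performs is simple to express (Python, hypothetical toy data): compute the error rate separately per subgroup rather than reporting a single aggregate number.

    import numpy as np

    def error_rate_by_group(y_true, y_pred, groups):
        """Misclassification rate computed separately for each subgroup."""
        y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
        return {
            g: float((y_true[groups == g] != y_pred[groups == g]).mean())
            for g in np.unique(groups)
        }

    # Toy example: a classifier that is much less reliable for one subgroup.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
    groups = ["lighter", "lighter", "lighter", "lighter",
              "darker", "darker", "darker", "darker"]
    print(error_rate_by_group(y_true, y_pred, groups))
    # {'darker': 0.75, 'lighter': 0.0}

An aggregate error rate over all eight samples would be 37.5% and would hide the fact that one group sees almost all of the mistakes.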