Manual for a popular facial recognition tool shows how much the software tracks (themarkup.org)
121 points by type0 on July 7, 2021 | 77 comments



I thought this was one of the more important points in the article.

> The Santa Fe Independent School District’s neighbor, the Texas City Independent School District, also purchased AnyVision as a protective measure against school shootings. It has since been used in attempts to identify a kid who had been licking a neighborhood surveillance camera, to kick out an expelled student from his sister’s graduation, and to ban a woman from showing up on school grounds after an argument with the district’s head of security, according to WIRED.

> “The mission creep issue is a real concern when you initially build out a system to find that one person who’s been suspended and is incredibly dangerous, and all of a sudden you’ve enrolled all student photos and can track them wherever they go,” Clare Garvie, a senior associate at the Georgetown University Law Center’s Center on Privacy & Technology, said. “You’ve built a system that’s essentially like putting an ankle monitor on all your kids.”

As a society, I think we need to be concerned about what happens when this technology becomes more widespread and normalized. I’m not holding my breath, though.


Covid masks + sunglasses were a great thing for evading surveillance.

Otherwise, "think of the children" will always prevail in western cultures, and parents will sign anything just to protect their kids, even from nonexistent harm... sadly.


I work in FR. The first PR move when covid hit was to publish videos demonstrating our accuracy with both a medical mask and sunglasses on. The technology still works, with less than an 8% drop in accuracy on single images and less than a 4% drop on motion video.


And honestly, covid just created a larger dataset of people in the wild with masks and sunglasses. In all sorts of lighting and settings.


What is the alleged accuracy before the drop, and under what circumstances?


Facts: https://cyberextruder.com/aureus-insight/ To be transparent, I stopped working for them in April.


I assume it's looking at more than faces then? Gait, earlobes, etc.


Face, head, ears & shoulders, with full 3D reconstruction and refinement via motion video. No gait, too easily gamed.


Any suggestions to lower the %? Small prosthetics under mask for nose/cheeks? Does hair bulk affect head shape? Any suggestion for ears? Shoulder pads? I guess there's not a great way to slim features, only bulk, which may be accounted for?

Is this currently mainly matching people against their past self's model, or is it also integrated with data connecting us to our identities (for the average suburbanite who doesn't visit locations that scan IDs frequently)?


Can't say much without revealing trade secrets. However, two things one can do to prevent identification via any FR: 1) place a bright LED somewhere near your face, such as on eyeglass frames or the brim of a hat. That bright LED causes a hot spot in the camera frame with an area of glow large enough to obscure one's face. Additionally, many cameras have a physical mechanism that closes an aperture to reduce the light received - that takes a bit less than a second; if the LED blinks at a rate near that mechanism's adjustment time, it will never adjust in time and the entire frame will always be too bright or too dark in alternation with the LED blink.
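A minimal sketch of the blink idea, assuming a Raspberry Pi driving the LED via RPi.GPIO (the pin number and the ~0.8 s period are guesses at a typical auto-exposure settling time, not measured camera behavior):

    import time
    import RPi.GPIO as GPIO  # assumes a Raspberry Pi with an LED on a GPIO pin

    LED_PIN = 18             # hypothetical wiring
    PERIOD_S = 0.8           # guess at the camera's exposure-adjustment time

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LED_PIN, GPIO.OUT)
    try:
        while True:
            # toggle near the exposure-adjustment rate so the camera
            # never settles: frames alternate too bright / too dark
            GPIO.output(LED_PIN, GPIO.HIGH)
            time.sleep(PERIOD_S / 2)
            GPIO.output(LED_PIN, GPIO.LOW)
            time.sleep(PERIOD_S / 2)
    finally:
        GPIO.cleanup()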

The second thing one can do is draw, or place stickers of, additional eyes, noses, and/or mouths on your face. Similar to the fantasy/art makeup look where people paint an extra set of eyes on their forehead - the FR will see 6 eyes or 3 mouths and will reject that as a human face.
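To illustrate how a naive pipeline could end up rejecting such a face, here's a hedged sketch using OpenCV's stock Haar cascades - I don't know what sanity checks any real FR product applies; this just shows the "too many eyes" heuristic:

    import cv2

    # stock Haar cascades shipped with OpenCV
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    img = cv2.imread("frame.jpg")  # hypothetical input frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
        # a face candidate with an implausible eye count gets dropped
        status = "accepted" if len(eyes) <= 2 else "rejected: too many eyes"
        print((x, y, w, h), status)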


I hope they don't go away. Before 2020 it was actually a crime (I can't remember what degree) to wear a mask here without a doctor's note that included an expiration date.


This comment would have more information content if the location of 'here' were specified.


If that helps, here's a global overview: https://www.wikiwand.com/en/Anti-mask_law

A lot of the "rich" world had anti-mask legislation, and failing the presence of legislation, a willingness to put pressure on their citizens to be "identifiable" in public at all times (where I am in Quebec, some such attempts being repelled was still news as of 2017, iirc).


Most of the southern US had laws like this.


When it becomes normalized? It's already terrifyingly normalized.


Do you have some specific examples you can share?

No examples, outside of social media and authentication, come to mind for the US. I’m probably out of the loop on this one.

Edit: I did find mention of facial recognition being used at 7-Eleven stores in Australia and trials of the technology at Target and Walmart in the US [0].

> [7-Eleven] confirmed that the data captured by the facial recognition software that is being introduced nationally, will be used to verify customer feedback rather than as a measure to prevent theft – in the US, Target and Walmart among others have experimented with facial recognition to combat shoplifting and fraud.

The article also had this at the end.

> Facebook also submitted a patent for in-store facial recognition tech that would provide retail staff with customer information drawn from social media profiles in a bid to deliver a more personalized service.

Edit: And this ACLU article [2] from 2018 implies many large US retailers are using it in-store as well, without notifying customers.

Edit: Oh and schools as mentioned in this article.

On a related but maybe ironic note, SF and some surrounding municipalities have, since mid-2019, suspended the use of facial recognition by government departments (with the exception of ports) [1]. So at least there is that. However, private companies and people can do as they may.

> In the Bay Area alone, Berkeley, Oakland, Palo Alto and Santa Clara County (of which Palo Alto is a part) have passed their own surveillance-technology laws. Oakland is also currently considering whether to ban the use of facial-recognition technology.

> The ordinance… outlaws the use of facial-recognition technology by police and other government departments

> However, the ordinance carves out an exception for federally controlled facilities at San Francisco International Airport and the Port of San Francisco.

> The ordinance doesn’t prevent businesses or residents from using facial recognition or surveillance technology in general — such as on their own security cameras

0. https://techwireasia.com/2020/07/why-7-eleven-australia-is-u...

1. https://www.cnn.com/2019/05/14/tech/san-francisco-facial-rec...

2. https://www.aclu.org/blog/privacy-technology/surveillance-te...


Sounds like, if anything, we can chill out. Or was your point not to highlight that the technology is even useful beyond just catching school shooters? Seriously, this mission-creep concern is just preaching to the choir for easy attention. No one who was fine with surveillance tech before is going to change their mind after hearing it turned out to be more helpful than imagined at solving problems.


This technology will inevitably be abused to do terrible things to these kids. I have absolutely no doubt about it.


Lots of fodder in this article, but this one stood out to me:

"The school district originally purchased AnyVision after a mass shooting in 2018, with hopes that the technology would prevent another tragedy."

I'm curious how facial recognition would prevent a mass shooting. Is there some database of faceprints of known future mass shooters?


It's a convenient excuse. Facial recognition against school shooters is the high school counterpart of email interception against child porn.


It doesn't. All it does is help track their location after you have identified them. Unless we want to turn back to phrenology...


I'll bet an image processor that can recognize guns and gunfire and dead people would be even better.

I suspect that people are drawn to "The Bourne Supremacy" look and feel of their command center, just behind the parent-teacher conference whiteboard.


Modern systems have a lot of problems, and you get a lot of false positives. But honestly, let's think it through. If someone can walk through a crowd with a visible gun without being noticed, it's pretty unlikely that a camera will identify it either. And if the system only sees guns once they're drawn, I'm not convinced that's meaningfully different, because at that point the gun is announcing itself. I have serious reservations about using gait analysis to determine who is carrying a weapon.

I do want to draw attention to Destin from Smarter Every Day's video on a gun detection system [0], where he even mentions part of the situation above. Maybe this could accelerate and automate police response, but it honestly sounds like an over-engineered system.

[0] https://www.youtube.com/watch?v=Lh0x54GC1sw


I just want to add this to the discussion as well. From 2017, an adversarial 3D-printed turtle that some recognizers detect as a rifle:

https://www.theverge.com/2017/11/2/16597276/google-ai-image-...

It's dark, but I can envision someone giving a version of this bad boy to someone they would like to prank. Kids will be kids...

There is no need for this razzle-dazzle anyhow.

The more ethical solution is to add a beacon to the ID card or bracelet or whatever you need to move around the school, like an EZ-Pass. You can use a shielding bag, turn it off, or leave it at home when you aren't at school.

You can use human-presence sensors to detect where people are and where they aren't. When there's another tragic shooting, you can fail-safe open the maglocks or access controls except where people are taking shelter, and either track the person directly or track where there is a person without a beacon.

Then, give cops the regular camera feed for situational awareness.
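A toy sketch of that failsafe logic (zone names and sensor feeds are invented for illustration):

    # bodies per zone (presence sensors) vs. badge beacons per zone
    presence = {"room101": 3, "hallway": 1, "room102": 0}
    beacons = {"room101": 3, "hallway": 0, "room102": 0}
    sheltering = {"room101"}  # zones where people are taking shelter

    def on_alarm():
        for zone, people in presence.items():
            if zone in sheltering:
                print(f"{zone}: keep maglocks ENGAGED (people sheltering)")
            else:
                print(f"{zone}: failsafe-OPEN access controls")
            if people > beacons.get(zone, 0):
                print(f"{zone}: ALERT - person present without a beacon")

    on_alarm()  # the hallway has a body but no beacon, so it gets flagged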


I would add that even IF the technology allowed one to "scan" a crowd and actually detect a (visible) gun instantly and reliably, what "remedies" do you have handy?

I mean, it is not like at any school there is (or can be) a sort of SWAT team on alert and capable of an intervention in - say - 5 minutes.

And - I can't speak to the specific 2018 shooting at that school - but from what I understand the typical sequence is:

1) someone enters a school (or church, etc.) with a gun

2) as soon as a suitable target is within firing range, the shooting starts

and there is no more than 1, 2, or 3 minutes between #1 and #2 above. If the gun is detected earlier (say, outside, while the shooter is approaching the building), you can maybe gain a couple more minutes.

And - even if you actually had this fast armed intervention team ready - just imagine the risks of false alarms and incidents involving perfectly innocent people. It cannot possibly work.

Besides, there is no real need for facial recognition in this scenario, i.e. anyone, no matter who he/she is, should be detected if carrying a gun.


I agree there's no need for facial recognition in this scenario and I'm not an advocate for its use, but think of how many times you can point a gun at someone and pull the trigger in 30 seconds. With each shot potentially representing a life, a couple minutes really would matter.


Sure, but what is the "normal" or "average" intervention time from the moment the police are alerted?

I would guess something between 10 and 15 minutes at the very least (setting aside the case of police or military personnel already present locally).

If what you want to prevent is that kind of indiscriminate mass shooting, this doesn't seem effective: by the time the police (alerted early by the automatic system) arrive, the shooter will very likely have fired all their ammunition and already killed everyone who happened to be in range.


It doesn't prevent all mass shootings, no. It may improve police response times, or if there is an officer at the school it may alert him. Improving police reaction time may save lives.

It could also sound a warning for the school so that students could preemptively run or hide. Imagine this system hooked into the lighting: lights in the immediate vicinity of the shooter could turn a dangerous red, nearby ones yellow, and those along routes away from the shooter green. Students could evacuate more easily if they could trust that system. Another innovation might be strobe lights that focus on the attacker to disorient and distract him.

I think there's a huge responsibility to such technology and much danger to it, but there are potential uses.
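To make the light-coloring idea concrete, here's a toy sketch: BFS over an invented floor-plan graph, coloring zones by distance from the reported threat (a real deployment would need an actual building model and tested thresholds):

    from collections import deque

    # invented floor-plan graph: zone -> adjacent zones
    floor = {
        "entrance": ["hall_a"],
        "hall_a": ["entrance", "hall_b", "exit_west"],
        "hall_b": ["hall_a", "gym", "exit_east"],
        "gym": ["hall_b"],
        "exit_west": ["hall_a"],
        "exit_east": ["hall_b"],
    }

    def light_colors(threat_zone):
        dist = {threat_zone: 0}  # BFS distance from the threat
        queue = deque([threat_zone])
        while queue:
            zone = queue.popleft()
            for nxt in floor[zone]:
                if nxt not in dist:
                    dist[nxt] = dist[zone] + 1
                    queue.append(nxt)
        # red at the threat, yellow adjacent, green further away
        return {z: "red" if d == 0 else "yellow" if d == 1 else "green"
                for z, d in dist.items()}

    print(light_colors("gym"))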


>Improving police reaction time may save lives.

Yes, but the reaction time comes "after" the alarm. The point I was trying to make is that this surveillance/weapon-recognition approach may trigger the alarm earlier (by a couple of minutes, maybe five), but if the reaction time is in the tens of minutes, it makes no particular difference to the actual outcome, as all the shooting will happen before the police can arrive anyway.

About trust in the system: it's tricky business. As always, the number of false positives needs to be reduced to zero or next to zero to gain trust, and that is very unlikely. Besides the system's own errors - innocent maintenance people, say plumbers or electricians carrying pipes, tubes, or power drills (let alone carpenters with nail guns) - I would bet the new rage among school kids would be finding perfectly legal items (for the sake of argument: antennas, cactuses, umbrellas) that trigger the alarm, just for the fun of it.

Good point about alternative "distracting" means (lighting colours, strobe lights, etc.), but again, when such a system is (hypothetically) installed, I doubt it can avoid daily, weekly, or monthly false alarms, and after a given number of them, the system usually gets disabled or - if possible - set to a very low (in practice ineffective) sensitivity.


A big reason that the false positive rate needs to be around zero is actually how uncommon school shootings are. Just a few false positives erode trust in the system and make people turn it off.

The problem is that black swan events [0] are really hard to predict.

[0] https://en.wikipedia.org/wiki/Black_swan_theory
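To put rough numbers on the base-rate problem (all figures are invented for illustration, not from the article):

    detections_per_day = 10_000   # objects flagged for human review
    p_true_threat = 1e-7          # prior that a flagged object is a real weapon
    false_positive_rate = 0.001   # i.e. 99.9% specificity

    false_alarms = detections_per_day * false_positive_rate  # ~10 per day
    true_alarms = detections_per_day * p_true_threat         # ~0.001 per day

    print(f"false alarms/day: {false_alarms:.1f}")
    print(f"expected true alarms/day: {true_alarms:.4f}")
    # nearly every alarm staff ever see is false, so the system gets
    # ignored or switched off long before the rare real event arrives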


I believe the questionable rationale there is that it helps to identify suspended students whose presence is assumed to be dangerous.

“The mission creep issue is a real concern when you initially build out a system to find that one person who’s been suspended and is incredibly dangerous, and all of a sudden you’ve enrolled all student photos and can track them wherever they go[.]”


You're not going to identify anyone unless you already have them in your system. Which opens up a whole other can of worms.


Yes, the stated original requirement to "find that one person who’s been suspended and is incredibly dangerous" does presuppose that the dangerous person is already in the system and you're just trying to find them.

I don't personally find the rationale persuasive; I'm just answering the question of how it could possibly be considered useful.


The unpleasant kind of tracking is when a number of cameras keep looking for a particular face and a person's movements are tracked, e.g. across a city. This is not immediately bad, but it has a high potential for abuse.

Still, you should fully expect to be tracked in higher-security areas, like banks or airport security zones. Large expansive stores, likely, too. If not by robots, then by human guards.

Privacy in a public place is the classical security through obscurity. Expect it to erode; preferably never assume it. If you read the famous cyberpunk novels as fantasy, it's now time to rethink them as a rather sober and real warning.


> Lawmakers, privacy advocates, and civil rights organizations have also pushed against facial recognition because of error rates that disproportionately hurt people of color.

When I was in China, I saw how many cameras there are everywhere. And how Chinese officials pass by high-speed trains with cameras.

For Chinese citizens, the state has trained facial recognition on millions of faces. They use it to detect people crossing roads against red lights (they show names and faces on big screens).

But I wonder what the error rate is for non-Chinese faces. Without enough samples, I could be identified as any other person of a similar skin color. Could I mistakenly be identified as some "person of interest" to the Chinese government? That was a scary thought.

I see the usefulness of these technologies, but "proactive" policing has high false-positive rates, and many innocent people will be harassed or worse just because of a machine's failure. All that without taking into account the cases where the machines are right, persecuting opposition figures in dictatorial regimes.


Serious question: Does a path exist that doesn’t lead to authoritarian use of this type of technology?


Facial detection has a lot of uses, and so does facial re-id (fusing tracks based on face vectors within a session). Facial recognition, or what should really be called facial identification - where media is converted to a numeric fingerprint to look up in a database - really does not have many use cases that give me the warm-fuzzies.
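For what it's worth, the "fingerprint lookup" at the core of identification is roughly this - a minimal sketch where random 512-d vectors stand in for the output of some face-embedding model (the model, the threshold, and the gallery are all assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    # enrolled "fingerprints"; in reality these come from an embedding model
    gallery = {"alice": rng.standard_normal(512),
               "bob": rng.standard_normal(512)}

    def identify(probe, threshold=0.6):
        best_name, best_score = None, -1.0
        for name, vec in gallery.items():
            score = float(np.dot(probe, vec) /
                          (np.linalg.norm(probe) * np.linalg.norm(vec)))
            if score > best_score:
                best_name, best_score = name, score
        # below the threshold, the probe matches nobody enrolled
        return (best_name if best_score >= threshold else None), best_score

    print(identify(rng.standard_normal(512)))  # hypothetical probe vector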

IMHO, the only reasonable use for this kind of biometric fingerprinting is creating biometric models for validating key public figures against deepfake videos. This approach uses motion biometrics, so it's not even face-ID, but it is a form of "compare vector against existing database", so I count it. This is also super specialized and only applies to figures with hours of video to train on.

Also my opinion, any system which does identification of the general population is going to inevitably be abused.

Face detection, eye tracking, emotional and sentiment analysis, etc., have many uses in the videoconferencing space. The vector data only exists in the scope of a session.

The part that makes it nasty and authoritarian is any time you build a database/index coupling identities (even pseudonymous ones) with features - face, gait, voice, whether from CNNs, HOG, or even just browser fingerprints. That is the dangerous bit, and where laws should focus their effort.

Re-id in public (where you vectorize and fuse tracks starting with the first glimpse) is a grey area and probably should be very scope-limited, e.g. an active warrant, specifically signed by a judge, authorizing a search for only that person. Even then, kinda icky.

Disclaimer: I work in the video emotional recognition field


As a good example, the two seemingly useful use cases (identifying an expelled student or a trespasser) could use this technology if deployed on the entrances and exits to a building. The more cameras you install, though, the more of a chance you’ll encounter a situation where mission creep occurs.

Cameras deployed throughout schools with AI watching will inevitably lead to misidentification, capture of information that isn't pertinent, or misuse. At best, the system errs too often, occasionally acting as a useful way to stop known trespassers. At worst, you send the police chasing down the wrong person, or you accidentally record something you wish you hadn't and the video is leaked.

I’m not quite sure how deployment of this system would lead to intended results, like stopping school shooters. It reminds me a lot of the Brussels airport that suggested having human “fire spotters” strategically stationed around the building as a substitute for a working and functional fire alarm system.


If that path exists, we missed a turn a few miles back.


Arguably yes, but we need to put in the effort to make it secure instead of outright banning it.


Why not outright ban it?


Cause it can be incredibly useful in solving a multitude of problems plaguing society, without making it a dystopian hell.


Like, for instance? And please don't say something related to guns, because that is just treating the symptoms and not the real problem.


Spousal abuse, child abuse, bullying, shoplifting, restraining orders, any number of things. Automated surveillance and facial recognition have so many potential uses in solving the world's ills that I am frankly baffled we aren't trying to figure out how to use them "safely". Instead we're promoting fear propaganda and beating the drum of "privacy" without really thinking about it or justifying it.


1. It's not about privacy just because. It's about avoiding dystopian, 1984-like repressive societies. Power corrupts. Give that power to a government and they will 100% use it to keep themselves in power and justify it with good intentions.

2. Could you please elaborate on your first examples? I can't see how those could work without (1).


I was more asking for examples of what AI solutions have achieved that are unambiguously good.


It's hard to find examples because of all the fear-propaganda surrounding facial-recognition and automated policing. But here are a few:

https://www.theregister.com/2021/07/01/seoul_ai_bridge_rescu...

https://techcrunch.com/2017/11/27/facebook-ai-suicide-preven...


Recurring pandemics where we're all wearing masks?


I work in vision and I still have not heard a compelling case for facial recognition. There are useful things like iPhone unlock (and similar applications elsewhere) and identification systems. But the potential for abuse is just far too high in my mind. They say don't give powers to Mr. Rogers that you wouldn't want Hitler to inherit. This is one of those powers. The good is things we can still do easily with other systems. The bad is a 1984 dystopia (also see the recent HN article about Tencent [0]. Why is no one here concerned about cameras scanning children... that's just inappropriate pictures waiting to be leaked). Crime has been going down. Terrorism is extremely rare. Pareto exists.

Like, the upside here is that if Mr. Rogers is in power you can unlock your iPhone and (if everything works perfectly... good luck?) get on an airplane ~1 minute faster. But if Hitler is in power you have modern phrenology (which we're already seeing).

What am I missing here? What is the great use case that warrants this?

[0] https://news.ycombinator.com/item?id=27755853


Humans seem to have to try out every bad idea at least once, usually more than once.

I hate to be pessimistic, but I don't see how it doesn't become ubiquitous very soon, with all the downsides ubiquitous surveillance is going to lead to.


It's even simpler: a bigshot from a security company visits a school director. The director feels important; the bigshot makes coin. Both make a case for improving security (how can you turn that down?), and a lot of networking and project work gets done so People Look Busy.

A lot of it is just buzz and network effect with little long term thought going into it.


When this stuff fully blooms it won't have a "downside", because that would imply some other thing of which that downside is a side. It will kill any semblance of humanity, and enabling it - nay, failing to fight it tooth and nail - is ultimately more heinous than all the crimes it might prevent combined.

I don't care to speculate about outcomes. Optimistic or pessimistic, it always presumes a lack of agency. I just can't bring myself to comment like a bystander or a fortune teller on something that is still ongoing and that I, like all people here, am involved in.


I guess what you’re missing is that in either case there’s a juicy Total Addressable Market. Sad, but there’s enough greedy talent and capital that your calculus doesn’t matter.


While I don't disagree with you, I think we can have better conversations on HN. Honestly I would love to hear from someone with a different opinion than me. I really do want to hear the counter argument. But this is also the technology that made Joseph Redmon[0] (creator of YOLO) leave vision research. We can have ethical discussions. We should have ethical discussions. And different opinions are how we learn. And I do think HN is one of those places to have these ethical conversations.

[0] https://pjreddie.com/


I hope you find a satiating interlocutor, but I think the most obvious explanation is probably the most correct one in this case. I know this theory doesn't exactly satiate one's intellectual curiosity, but sometimes the truth is boring.


I mean I agree, but with how prevalent this is and how profitable it is, I would imagine there's someone selling the technology. Especially here. I really do want to hear their opinions and have mine challenged. Maybe there's something I don't see and I'm overly paranoid. Hopefully we'll see some dissenting opinion.


I suspect that this kind of technology is already in use by authoritarian regimes with more diverse and relaxed ethical standards when it comes to surveillance.


> Facial recognition technology has been widely implemented in contemporary China and has become an integral part of people's daily life. (2020)

https://www.taylorwessing.com/en/insights-and-events/insight...


Do some research on China and the Uyghurs and make your own call on whether that is currently happening.


While that's prolific, it isn't like we aren't doing it in other parts of the globe. Highlight China, but highlight Europe and the US too.


What I often wonder when reading things like this is how on earth this gets approved in budgets. School districts surely don't have money to burn, so why would security be given extra money for something new?

Do these providers subsidise costs to normalise the use of their products in a sector, does the bogeyman of school shootings get rolled out for an easy sign-off, or a bit of both?


If I sold this tech, I'd sell it at a loss or give it away free to a few users, then pay for some astroturfing in neighboring districts with claims that "security is better at school X, which has installed system Y", and after that I'd expect orders from everyone.


They are using tactics that make it difficult to say no, such as pitching this as something that can prevent school shootings.

It's absurd but once you say "this can prevent a school shooting", who can say no to that?



> Facial recognition, purportedly AnyVision, is also being used by a supermarket chain in Spain to detect people with prior convictions or restraining orders and prevent them from entering 40 of its stores

We are heading into dangerous territory with this, similar to the "social credit score" system in place in other countries.

If this gets tied into other databases, such as criminal databases, I can see a scenario where people are ranked in real-time as potentially undesirable. Criminal convictions from years ago can now follow people around, putting security guards and store employees on "high alert" to watch people carefully, or simply deny them entrance.


Someone linked "Little Brother" in another thread, a novel about a very similar situation. I found it a good read, sprinkled with realistic descriptions of the tech & countermeasures.

https://www.gutenberg.org/files/30142/30142-h/30142-h.htm


Another Israeli company.


[flagged]


You certainly can't post anti-Semitic crap to HN. We've banned this account. The whole idea of "criticizing Jews" is bizarre—as if "Jews", or any large group, were a thing.


"The public records The Markup reviewed included a 2019 user guide for AnyVision’s software called “Better Tomorrow.”"

Well, that's ominous.


Actual title:

>> This Manual for a Popular Facial Recognition Tool Shows Just How Much the Software Tracks People

Ugh. After seeing that I'm really not sure this is an article I want to read. Why do people write titles like that? It just makes it very hard to trust anything written in an article.

Thanks type0 for making the title less awful, btw.


Why do we need facial recognition everywhere? We don't. Can we just stop building a sci-fi dystopian society?


A capitalist society can put forth a hundred justifications for facial recognition.


That is an odd statement. Do you believe non-capitalist societies can't put forth a hundred justifications for facial recognition?

If so, how do you explain communist China's extensive surveillance / camera system? How about socialist countries in Europe?


China is a totalitarian dictatorship, with an otherwise pretty capitalist organization. No European country is socialist in the sense of social ownership of the means of production. How they describe themselves is irrelevant.


My point was that it's easy for capitalist society to justify it. I am not arguing that a non-capitalist society can't.


Well... gotta go buy masks to hide from this. It's insane how normalized this is becoming and how dystopian things are getting. And the trend is only accelerating.


I don't understand how parents in those schools don't go mental over this. Unless they don't know about it...



