
The camera doesn't have to watch over you; it can just sense and report. Sometimes the video itself is needed, but not always. So can we build a camera that is effectively blind and still tells you things in other ways: how long I have studied, how often I work out, and so on?

Here is the app, now live on Android: https://play.google.com/store/apps/details?id=org.sharpai.ai... (Note: don't try to search on Google Play; the keyword "blind camera" simply doesn't find it. Clicking the link above is the way to get the app on Google Play, if you still like it after reading this article.) The first screen after installing and launching the "Blind Camera" app is a very simple GUI; the only option is choosing the front or rear camera. (Don't worry, the camera never keeps an image: it processes the data coming off the sensor and throws it away immediately after processing. Nothing is exposed to other apps or leaves the device.) After the phone sat on my desk for a while, a green bar appeared at the bottom under 12:00. It indicates how long I have been "here" writing this article, at 12:00 pm in Silicon Valley, California. As I stay longer, a flame appears from the lamp, and the longer I stay, the bigger the flame gets.
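To make the "blind" part concrete, here is a minimal sketch of the idea in Python, assuming OpenCV and its built-in HOG people detector; the `detect_person` helper and the presence counter are illustrative placeholders, not the app's actual code:

    # Minimal sketch of the "blind camera" idea (not the actual app code):
    # frames are analyzed in memory and discarded immediately; only an
    # anonymous presence duration is kept.
    import time
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def detect_person(frame) -> bool:
        """Hypothetical detector: True if someone is in the frame."""
        boxes, _ = hog.detectMultiScale(frame)
        return len(boxes) > 0

    cap = cv2.VideoCapture(0)      # front or rear camera
    seconds_present = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if detect_person(frame):
            seconds_present += 1   # the only thing we keep
        del frame                  # raw pixels are never stored or sent
        time.sleep(1)

    print(f"Time spent here: {seconds_present // 60} min")

The point of the sketch is the data flow: nothing but a duration counter survives each frame.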

One last thing: the team is working on an upcoming "teachable" version, which will let the end user teach the "Blind Camera" to recognize an "action" blindly. The teaching should also be very easy: just sit down and work.


Hi Mattlondon, it's really nice to get your reply. Good question: the 'Mali GPU' support is not for the RPi; it's for boards that have a Mali GPU, such as the RK3288/RK3399 or others (MTK/...). For now, the reference code runs on the RPi CPU: a lightweight MXNet model extracts the face features, the detection code also runs on the RPi, and so does the classifier. The API server is the app-logic server; AWS/MinIO is the storage layer. The light model is https://github.com/deepinsight/insightface/wiki/Model-Zoo#34...

which uses MobileNet as the backbone. If running on a Mali GPU, ResNet50 can be used instead. Inference time depends on the GPU's power; on a Mali 720 MP2 it should be something close to 0.3 s.
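For readers who want a feel for that pipeline (detect face, extract an embedding with the light MXNet model, classify on the RPi), here is a rough sketch. The checkpoint prefix 'mobilefacenet' and the kNN classifier are placeholders I chose for illustration, not the project's exact files or training code:

    # Rough sketch: detector output -> MXNet face embedding -> simple classifier.
    import cv2
    import mxnet as mx
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Lightweight MobileNet-backbone face model (CPU context on the RPi).
    sym, arg, aux = mx.model.load_checkpoint('mobilefacenet', 0)
    mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
    mod.bind(data_shapes=[('data', (1, 3, 112, 112))], for_training=False)
    mod.set_params(arg, aux)

    def embed(face_bgr):
        """Return a normalized embedding for one aligned 112x112 face crop."""
        img = cv2.cvtColor(cv2.resize(face_bgr, (112, 112)), cv2.COLOR_BGR2RGB)
        batch = mx.io.DataBatch([mx.nd.array(np.transpose(img, (2, 0, 1))[None])])
        mod.forward(batch, is_train=False)
        vec = mod.get_outputs()[0].asnumpy().flatten()
        return vec / np.linalg.norm(vec)

    # The classifier on the RPi can be as simple as nearest-neighbour search
    # over the embeddings of people it has already been taught.
    clf = KNeighborsClassifier(n_neighbors=1)
    # clf.fit(known_embeddings, known_names)
    # name = clf.predict([embed(face_crop)])[0]

Swapping the backbone for ResNet50 on a Mali GPU only changes the checkpoint; the rest of the flow stays the same.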


Yes, it would be really nice if it ran on a Jetson Nano with its GPU accelerator...


Privacy is the most important issue for an AI surveillance camera. Using open source and a BYOD server is the only way to solve this problem. SharpAI DeepCamera provides a private-cloud architecture so all your information is saved on your own storage.
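As an illustration of the "your own storage" part, here is a sketch of writing detection events to a self-hosted MinIO instance through its S3-style API. The endpoint, credentials, bucket name, and event format are placeholders, not DeepCamera's actual schema:

    # Sketch: detection events land on your own MinIO server, never a third party.
    import io
    import json
    from datetime import datetime, timezone
    from minio import Minio

    client = Minio("minio.local:9000",
                   access_key="minioadmin",
                   secret_key="minioadmin",
                   secure=False)          # self-hosted, on your own LAN

    bucket = "deepcamera-events"
    if not client.bucket_exists(bucket):
        client.make_bucket(bucket)

    event = {"camera": "front-door", "label": "unknown_face",
             "time": datetime.now(timezone.utc).isoformat()}
    payload = json.dumps(event).encode()

    # Nothing leaves your network: the object is stored on your own disk.
    client.put_object(bucket, f"events/{event['time']}.json",
                      io.BytesIO(payload), length=len(payload),
                      content_type="application/json")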


> Using open source and a BYOD server is the only way to solve this problem.

I don't think technology is the solution here. In fact, it is much more the problem here – an abundance of surveillance technology projects like this one will only result in more surveillance, not less. The average Joe doesn't care about the technology stack the surveillance camera[0] on the subway is running on. Instead, the mere presence of the camera will cause him to have a feeling of being watched and, thus, to behave differently.

The solution can only be a political one: Restrict the usage of AI in surveillance technology.

[0]: aka "security cameras", as they are euphemistically called by their manufacturers.


1. No product has implemented the feature we open sourced; this is unique.
2. The whole thing (AI inference and classifier training) runs offline.
3. It runs on ARM.
4. We have classifier training on the embedded system.
5. I compared the theoretical and real numbers, and ARM comes out better than an Nvidia GPU here.


Three years ago, we decided to build a product in the hottest area: machine learning/deep learning. We tried several cloud options, but couldn't turn them into a real product because of cost and other limitations.

So we started to develop this product: DeepCamera.

Committing to one piece of hardware (a camera) and going deep into porting would have cost half a year, even with our strong embedded Linux (router) experience. So this is the best way we figured out: software running on a set-top box or on Android (tablet/mobile).

Also, we strongly believe AutoML is the only way for an AI product to succeed. Users should be able to teach/train the AI (model) as simply as chatting. So we built the application like WeChat/WhatsApp: people talk to the machine to make it smarter. When it recognizes someone incorrectly, you just rename the entry and the machine retrains itself to remember the correction (see the sketch below).
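Here is a rough sketch of that rename-and-retrain loop, assuming face embeddings are already extracted; the `TeachableRecognizer` class and its kNN classifier are illustrative, not the project's actual training code:

    # Sketch of "rename it and the machine remembers": keep corrected samples
    # in a label store and retrain a tiny classifier after every correction.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    class TeachableRecognizer:
        def __init__(self):
            self.embeddings = []   # list of 1-D numpy vectors
            self.labels = []       # parallel list of person names
            self.clf = None

        def rename(self, embedding, correct_name):
            """User corrected a misrecognized face: store it and retrain."""
            self.embeddings.append(embedding)
            self.labels.append(correct_name)
            self.clf = KNeighborsClassifier(n_neighbors=1)
            self.clf.fit(np.vstack(self.embeddings), self.labels)

        def recognize(self, embedding):
            if self.clf is None:
                return "unknown"
            return self.clf.predict(embedding.reshape(1, -1))[0]

Because the "training" is just refitting a small classifier over stored embeddings, it can run entirely on the embedded device.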

After deploying it in one of the largest industry-leading data centers to protect their security, we are finally getting to our ultimate goal: open source.

This is how a platform like Android could be born for AI. That is our dream: an Android for AI. That's SharpAI. With your help, we will succeed.


Thanks a lot for the tip, I'll follow the 'Show HN' guidelines to show the project.


Burn the AI image onto an RK3399 HMAX box, then connect the box to a Dahua network camera.

It will then push notifications about familiar or unknown faces to your mobile.
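Roughly, the box pulls the camera's RTSP stream, runs recognition on the frames, and fires a notification per sighting. A sketch of that loop follows; the RTSP URL, the webhook endpoint, and the `recognize_faces` helper are placeholders, not DeepCamera's actual code:

    # Sketch: pull RTSP from the network camera, recognize, notify the phone.
    import cv2
    import requests

    RTSP_URL = "rtsp://admin:password@192.168.1.64:554/stream1"  # camera on the LAN
    WEBHOOK = "http://phone-notifier.local/notify"               # your own push gateway

    def recognize_faces(frame):
        """Hypothetical helper returning names, or 'unknown' for strangers."""
        return []

    cap = cv2.VideoCapture(RTSP_URL)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for name in recognize_faces(frame):
            requests.post(WEBHOOK, json={"event": "face_seen", "who": name})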


