I can feel the pain here but in my own way. As a hiring manager it must be so frustrating to be assaulted with massive idiocy when you are looking for a candidate. Think about it:
- candidates have zero penalty for applying to something they know they aren't qualified for
- there's constant noise about getting one of those well paying "tech jobs"
- everyone who has ever logged hello world to the console thinks they are the best programmer in the world
- people are desperate and will lie
- the ratio of unqualified to qualified candidates must be 100 to 1
It must be like dying of thirst in the middle of the ocean. The one thing I love about this, though, is the "senior developer" who hasn't read a book in a decade and has to get a new job. ahahahahaha. The moment when that mountain of BS collapses underneath them must be ga-lorious.
Also though, I get the feeling it's a different job market than it was in 2010. I'm still trying to understand what I think I see.
Languages seem to be able to cover multiple platforms now. I don't think programming is as technical as it once was. At the same time, everything seems to be a mess right now. Everything's broken. There's a million and a half frameworks for everything. ...It's like programming as a field has become much broader while simultaneously declining in quality, with effort that was once spent on technical mastery now being traded for either lower wages or domain knowledge. A degree in programming used to be pretty impressive; now you're just some dude who can "code".
Question: what is the job market like for someone who is good at graphics programming? I'm not the best at math, but I have struggled my way through enough linear algebra that, with some practice, I think I could be good at graphics programming. It'd certainly be a nice change of pace from programming CRUD forms for businesses.
A few years ago I went deep into an effort to make my own 3D animation company, all in-house software and everything from the ground up as much as possible.
I wanted to make the next Pixar and went far with the idea. I built a lot of working prototypes for the software we'd use, all by myself. I built something I was pleased with and sought a seed round and everything. I certainly don't know everything, but I know way more than my fair share about graphics programming.
That's stellar experience, and even with that I can't find any relevant graphics opportunities. I keep knocking on Pixar's door; they never have open roles for the kind of work I'm looking for, and when roles are open, they never bite.
I work at Adobe now. Even here I'm not touching graphics :)
I'd love to hear someone else's take, because all I can conclude is that it's a saturated industry, it's a primarily academic industry (I don't have a degree), or I'm outside the industry network.
I think it would be pretty good. Fewer employers need graphics programming, but there are a lot fewer candidates. I've been trying to hire someone with some graphics programming experience and so far have had a lot of trouble finding candidates in SF.
Personally, I think it's quite good, especially if you're flexible about where you work. Practicing until you know enough linear algebra to understand graphics transforms thoroughly will help a lot. There are graphics jobs in web app development, mobile/console/PC game studios, game engines, scientific computing and HPC, medicine, movie studios, third-party software renderers, and lots and lots at Bay Area giants like Facebook, Google, Apple, AMD & Nvidia.
Construction industry. I have an upcoming project building a web-based 3D editor, and before that I had an open offer for another construction-industry 3D app. I also have a close friend who just finished a gig in the same line.
I thought the whole presentation was pretty cool. Those colonies are just visions of what might be built by future generations. He has new ideas about the future of industry and that's exciting. But I have a hard time seeing the scales he's talking about. Those colonies would not be able to hold billions of people. I'm not sure if moving industry off of Earth, especially for environmental reasons, makes a lot of sense. But when people imagine the future as being exactly like the present it's just tiring and stupid. I don't think he was acting like he knew all the answers but he's moving forward.
> But I have a hard time seeing the scales he's talking about. Those colonies would not be able to hold billions of people.
It entirely depends on how many of them you build. You could also achieve much higher densities than Earth with no real loss of quality of life, since you wouldn't necessarily build massive oceans or deserts or mountains. (Though I would really love to live on an O'Neill cylinder that was laid out as an archipelago!)
Facebook encrypted messaging! What's next, military intelligence? How about a vegan big-mac? Maybe a quality automobile by GM?
I think steganography is an excellent way to deliver encrypted messaging to consumers. It has so many inherent features that I'm surprised it isn't already widely used. Let's see:
- easy for the intended recipient to decode, but hard for anyone else to detect
- can pass through any channel that accepts images
- massive storage capacity (10MB+ depending on how you roll)
- encryption easily baked in!
- many additional use cases (store your kid's SSN or your passwords, store encrypted notes, anonymous communication by just posting an image online somewhere)
Everyone should know Facebook's encryption is about as trustworthy as a free VPN's encryption (or maybe most VPNs'). But with steganography, all you need is an open source application that you can trust, or a popular codec.
If anyone is interested, I have a stalled steganography project that I'm waiting to get back to (once I finish an ASP.NET Core book): https://github.com/smchughinfo/steganographyjr. I'm making it as easy to use as possible (UWP, iOS, Android, a website, a Web API, NuGet, and possibly a native app for Debian if I get the time). Most of that, though, you get for free with .NET Standard + Xamarin, but it's still a lot of work.
Steganographic communication as a substitute for encrypted text is a baffling misinterpretation of the reason for encryption in a chat program. The use cases and potential userbase barely overlap at all.
I don't want my conversations with my mother to be public. But we are not going to communicate in secret messages hidden in images as if we were espionage agents, and most assuredly 98% of the public will not, either. Not to mention that steganography has a security-by-obscurity aspect: the more you raise awareness that textual messages may be concealed in images, and present a common mechanism for doing so, the less effective it is for escaping scrutiny.
Also, I'd note regarding your points that steganography has no "storage capacity". That's a characteristic of the underlying medium. It is not a standalone communication system - if I'm sending secret spy image messages to my tow truck company instead of normal text messages, the storage is foremost limited by the text message system.
> Steganographic communication as a substitute for encrypted text is a baffling misinterpretation of the reason for encryption in a chat program.
I agree with you, but couldn't you say the same thing about using end-to-end encryption in a chat program as a substitute for messaging that's just encrypted in transit?
> Steganographic communication as a substitute for encrypted text is a baffling misinterpretation of the reason for encryption in a chat program.
> I agree with you, but couldn't you say the same thing about using end-to-end encryption in a chat program as a substitute for messaging that's just encrypted in transit?
I just want to point out, again, that this is not an argument that I tried to make.
But what are you saying people should do? Only communicate information that I don't mind being public using traditional non-secure messaging systems, and use steganography whenever I want to communicate private information?
Steganography+encryption has a number of use cases. The one I think is most interesting is being able to store encrypted data locally with ease. Right now if I want to encrypt some text I have a number of options.
I can encrypt the hard drive. I can encrypt a text file to a binary encrypted file. I can encrypt a text file to a text file with something like PGP. But none of those are what I would call user friendly. Through the magic of steganography, you could do all of that and save the result to an image file. Now we have something that people might be comfortable using.
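To make that concrete, here's a minimal sketch of the encrypt-then-embed idea, assuming Pillow and the `cryptography` package; the function names and the toy key derivation are mine, just for illustration, not from steganographyjr or any existing tool:

```python
# Sketch: password-protected text hidden in a PNG's low-order bits.
import base64, hashlib
from cryptography.fernet import Fernet
from PIL import Image

def _key_from_password(password: str) -> bytes:
    # Toy key derivation for this sketch; real code should use a salted KDF.
    return base64.urlsafe_b64encode(hashlib.sha256(password.encode()).digest())

def embed(image_path: str, out_path: str, text: str, password: str) -> None:
    token = Fernet(_key_from_password(password)).encrypt(text.encode())
    payload = len(token).to_bytes(4, "big") + token   # 4-byte length header
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    img = Image.open(image_path).convert("RGB")
    flat = [c for px in img.getdata() for c in px]    # one channel per entry
    if len(bits) > len(flat):
        raise ValueError("message too large for this image")
    for i, bit in enumerate(bits):                    # one low-order bit each
        flat[i] = (flat[i] & ~1) | bit
    img.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
    img.save(out_path, "PNG")                         # lossless, bits survive

def extract(image_path: str, password: str) -> str:
    flat = [c for px in Image.open(image_path).convert("RGB").getdata() for c in px]
    def read(n_bytes: int, bit_offset: int) -> bytes:
        return bytes(
            sum((flat[bit_offset + i * 8 + j] & 1) << (7 - j) for j in range(8))
            for i in range(n_bytes))
    length = int.from_bytes(read(4, 0), "big")
    return Fernet(_key_from_password(password)).decrypt(read(length, 32)).decode()
```

Password in, ordinary-looking PNG out; the same password recovers the text. That's the whole user-facing surface area.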
As for secure chat idk. I wouldn't trust Windows, iOS, Android, my ISP, my VPN, the NSA (and whoever else), the spyware my mom has installed on her computer that neither of us know about, etc. I'd probably just google for something but I wouldn't be under any illusion that it's totally secure.
Can you elaborate on the logic of why saving encrypted text to an image file is more user-friendly than saving it to a text file? Why would that make people more comfortable?
Because people are more comfortable dealing with image files than .enc files or whatever extension one might use. Plus you don't have to encode just text; you can encode any file type. Look, I don't know what this is to the various participants in this thread, but to me it's been really sad. I feel like I'm arguing politics. I don't think I've said anything unduly disrespectful or even incorrect, yet I've been arguing about this with people who apparently think they know better but consistently get basic facts wrong or appear to be disingenuous to help win a debate. I'm not here to connect every dot for you. You're not holding my ideas up to the light of truth or whatever you think you may be doing. I really regret logging on to hackernews today.
Sure, for a chat conversation you would want something faster than steganography. But if you will notice I did not propose a solution for encrypted chat. I proposed a solution for making encryption easier to use, yes? I hope that debaffles you a little.
Steganography alone is just security through obscurity? I guess I'm not sure which algorithm you are thinking of, but regardless, it's very easy to encrypt your data before writing it to the image, so in any case that is a non-problem. The same goes for your point about steganography detection. Maybe it's possible for some algorithms, I don't know, but I have very strong doubts about that, and again, it's encrypted.
The amount of data you can write to an image using a steganographic algorithm could be rightly called its "storage capacity", yes? Or do you believe that for each image there is an exact maximum storage capacity regardless of the way you encode data to it?
“I think steganography is an excellent way to deliver encrypted messaging to consumers.” is in your prior post.
If you are not using steganography for the obscurity aspect, why use it at all? Why not just encrypted plaintext that can be decrypted?
Steganography is intended to conceal that a message is being sent at all, other than the apparent message of an image. If my recipient and I are both using Cool Steganography Messaging App, or you are marketing CSMA to the general public, that removes that crucial feature.
As far as storage capacity, what I mean is that it is not a concept that steganography itself envelops. The amount of data you could include would be limited by the lower-level transmission systems - whatever software and hardware you are using to actually transmit, receive, store, and view images, such as the image format and your phone's storage.
I meant messaging in a more general context. It does not remove that feature at all. Why do you think that two people using the same app or algorithm automatically reveals the presence of a hidden byte array? You just vary the way data is read out of the image using the same password used for encryption. Even if they could recover every single bit using statistics (which they can't) they would have no idea what order to put them in. That's just one way of doing it too. If you put a real math wizard on the case I'm sure they could do even better.
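To sketch that "password decides the read order" idea, here's one illustrative possibility: a password-seeded shuffle of the channel positions. (A careful design would use a proper KDF and a cryptographic permutation rather than Python's built-in PRNG; this is just to show the shape of it.)

```python
# Both sides derive the same pseudo-random walk over the image's channels from
# the shared password, so even someone who suspects hidden LSB data doesn't
# know which bits belong where, or in what order.
import hashlib
import random

def keyed_order(password: str, n_channels: int) -> list[int]:
    seed = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    order = list(range(n_channels))
    random.Random(seed).shuffle(order)   # deterministic for a given password
    return order

# Embed bit k into channel order[k]; extract by reading in the same order.
```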
Storage capacity IS a function of both the algorithm and the image. That's simply a fact. For example, say we are just bit-flipping a 512x512px image and we take up all 8 bits in each color channel of each pixel. That lets us write 512 * 512 * 3 * 8 = 6,291,456 bits, or about 768 KB (the little helper below spells out the arithmetic). ...I can see how it looked like I was talking about real-time communication because I said messaging. That was a mistake. Honestly, I have been playing around with the thought of if/how steganography could be used for chat, but that really was not how I meant it to sound. I was thinking about how steganography might be able to make encryption more user friendly.
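The helper mentioned above, for concreteness (the all-8-bits case rewrites the image outright; one bit per channel is the usual least-significant-bit trade-off):

```python
def lsb_capacity_bytes(width: int, height: int,
                       channels: int = 3, bits_per_channel: int = 1) -> int:
    # Payload capacity of a raw bitmap under simple bit-flipping steganography.
    return width * height * channels * bits_per_channel // 8

print(lsb_capacity_bytes(512, 512, bits_per_channel=8))  # 786432 bytes, ~768 KB
print(lsb_capacity_bytes(512, 512))                      # 98304 bytes with 1 LSB
```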
I’m not saying that the message isn’t secure in an encryption sense. It’s just that embedding it in an image has no advantage in an encryption sense, and if the advantage is not secrecy about the presence of a message at all, what is it?
Sure, steganography has an information capacity based on the image format used. But the real upper bound depends on the other layers.
The way to make it user friendly is to make it transparent. I don’t see how this would do that.
Speaking of layers that's how I want to answer the first part. Encryption makes it secure but steganography makes it portable. Steganography is the sugar that makes the medicine go down. That's how I think it could work anyways, I'm not saying that is what would happen.
Certainly for most things I would prefer encryption in transit, with whatever data is received destroyed after viewing. Some people think Snapchat is like that, and it's a big reason many people use it the way they do.
However, I can imagine use cases where someone would want to keep, say, a kinky fantasy story someone wrote to them, but needs to keep it in a form where, if discovered, it would be difficult to discern that it was a naughty message at all.
Like the "calculator app" that many of the younger folks are using to hide nudes... you'd have a "cool cat memes with friends app" - with some of the images shared having extra data embedded...
Some parents and others are getting smarter about checking the most-used apps on a phone, so they are able to question why someone used "hidden locker calculator" 8 hours each day. If you had "cat meme share" being used 8 hours a day, you could open said app and show your parents/lover/whoever the funny memes... and they may not know that extra info could be embedded, for example.
This may save some people doing bad things, but may also save some people from being outed about their <insert small niche not socially well accepted interest / lover / friend here>.
> can pass through any channel that accepts images
No. Any online service worth its salt is going to re-encode images to serve proper sizes and maybe do other processing. Along the way, stuff like EXIF data and other cruft that's worthless for displaying the image will get stripped. Alternatively, if you mean encoding in the actual pixels of the image rather than embedding in the file somehow, that data will get lost as well when the image is resized and resampled. To survive most image manipulations, the encoding will have to be quite crude, and you'll have low bandwidth with this kind of scheme.
An exception would be some photographer oriented services like Flickr that allow you to download the original file but those are a minority.
Yes. Any algorithm designed to be resilient to common processing steps will pass this test with flying colors. Also, EXIF data is not used in steganography, by definition.
That's why I mentioned that if you encode the data in the actual image (the part that's guaranteed to survive processing), you cannot do it with very fine elements, like subtly shifting the colors of individual pixels, because an average Facebook JPEG pass, for example, will just destroy that. You need data points that can survive heavy JPEG artifacting, and that means very few data points per image, and low bandwidth.
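To put a rough number on it, a quick experiment along these lines (assuming Pillow; the file names and quality setting are placeholders) shows how thoroughly a single lossy re-encode scrambles a naive least-significant-bit payload:

```python
# Round-trip a stego image through JPEG and count how many low-order bits flip.
from PIL import Image

img = Image.open("stego.png").convert("RGB")       # image carrying LSB data
img.save("reencoded.jpg", "JPEG", quality=85)      # what a host site might do

before = [c & 1 for px in img.getdata() for c in px]
after = [c & 1
         for px in Image.open("reencoded.jpg").convert("RGB").getdata()
         for c in px]
flipped = sum(b != a for b, a in zip(before, after))
print(f"{flipped / len(before):.0%} of low-order bits changed")  # often ~50%
```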
That’s a moving target with no guarantees to stay true. Steganography in the use cases you’ve described adds complexity and additional portability challenges over a plain encrypted file.
Steganography has a bad connotation because it's heavily used in the pedophilia realm, which would limit its uptake, somewhat like torrents: perfectly valid and useful tech that gets used by a few but not by most.
I think Telegram, even with its flaws, is the closest I've come to an easy-to-use encrypted messaging app that I can get my mother to use and like.
I don't think anyone cares if pedophiles use it. They only care if it will work for them. Heck, if it keeps pedophiles safe that's a pretty good endorsement. I think the primary road block for most people is not seeing a use case combined with the technology not being readily available (excluding a few apps that aren't compatible with each other).
I think you're grossly underselling the emotional response the larger public user base would have to being associated with paedophilia, even if it's only tangentially associated via an app.
Unfortunately that's the nature of the beast. You and I, in addition to our peers, would probably see it as an endorsement (as you correctly stated). But we're not Joe Bloggs.
The feeling of disgust is so easily manipulated amongst the greater public.
Is it? I associate steganography with espionage and consumer printer identification. It's not convenient or practical for most things, but I don't think it has the dark reputation you suggest.
As a teenager, I was a vegetarian. I remember once I was the new guy at a computer repair place, so it was my job to go across the street and get everyone's burgers. This was, of course, the late '90s, and Burger King did not have a veggie burger.
Anyhow, I go up to the counter and rattle off everyone's order from my list. I'm making conversation with the person at the checkout and mention I'm a vegetarian (I think sometimes it's a little like CrossFit, in that regard). Anyhow, this person mentioned that Burger King had veggie burgers, and they could make me one. Excited to have something other than just french fries, I accepted.
So I get back to the office and hand out the burgers. I go to dig into mine, and it's just a bun with way too much mayonnaise and some lettuce. It was so disappointing.
I'm not a vegetarian anymore, but I do still enjoy veggie burgers, so I will have to go try this out.
I suppose you could, and I am certainly no expert, so it might even be better that way. A couple of cons to that are that it looks cryptic so it's a little easier to detect, that you have to share the URL somehow (as opposed to hanging out in usersub on imgur), that it's usually more difficult to deal with large amounts of raw text than an image, and that if you use PGP instead of AES->base64 (or something like that) you would have to know the receiver's key. I guess that last bit depends on the use case.
I'm not saying either approach is better. Maybe one is better, I don't really know.
I tend to commit, read my commit, and then find all kinds of mistakes and have to amend or redo the changes on a separate branch. If those options aren't available I just have a messed up commit history. This is all because git makes modifying your commit history very difficult to do. I think this immutable feature makes git worse because I have no intention of lying to myself or my team about my commit history. I would just like easier tools for pruning and organizing it. Instead my commit history is always a mess that better represents my knowledge of git than the progress of whatever I'm working on.
Are you pushing the commits with errors? Are you merging the commits with errors? If both of those are true then this sounds like a process issue.
Git makes it extremely easy to edit history, with the ability to amend any commit, even several commits back, with simple CLI tools like `git rebase -i <ref>`.
However, what it doesn't like you doing is pulling the rug out from under other people, i.e. editing history that team members are basing their work on.
The entire purpose of distributed version control is that development should always happen on a branch, whether that's a local branch or some temporary pushed branch (a pull/merge request branch, etc.). In both cases you can safely rewrite history.
However, `master` (or whatever mainline branches you have) should never contain simple mistakes (e.g. non-compiling code), because the code should have been reviewed before being merged. Of course bugs (non-simple mistakes) will happen, and these ought to be fixed in future commits. Bugs that make it into pre-release or release builds (i.e. `master`) shouldn't be edited out of history or forgotten.
The biggest part of my problem is a total lack of commit discipline but there are times when I'm working on a branch where my commits don't tell a clear story (changed something then changed it back because I decided to do it a different way). That's when I most wish for better ways to tell that story.
I feel like an idiot for not knowing rebase could solve some of this for me. ...will definitely try it next time.
That back and forth is the most important part of the story! It shows that you thought through multiple approaches to the problem, (hopefully) why they didn't pan out, and they give someone else a starting point for returning to that approach in the future.
It isn't exactly rare that I go through the blame history on some project to find out why something was done in a way that seems stupid at first glance, just to get stuck on a giant squash commit saying "Implemented X".
"back and forth" is not the same thing as "all kinds of mistakes".
No-one cares about stray keystrokes other developers make, it's just noise.
Yes, we absolutely care about the design of the software we're working on, and that's what commit messages, self-documenting code, comments, issue trackers and project management (planning session etc.) are all for.
When you squash commits in Git, the default generated commit message even merges together all your previous commit messages. Now is your chance to look at those old messages and change "Did X" to "Attempted X, but it didn't work because Y".
When I'm investigating when and why some code was implemented the way it is, I don't want to run git blame trying to find when something was changed, just to see that the most recent change was reverting some earlier messing around, then git blame again starting from just prior to said messing around, just to see the same thing again - noise is bad!
> No-one cares about stray keystrokes other developers make, it's just noise.
Sure, and `git commit --amend` is fine for those cases.
> When I'm investigating when and why some code was implemented the way it is, I don't want to run git blame trying to find when something was changed, just to see that the most recent change was reverting some earlier messing around, then git blame again starting from just prior to said messing around, just to see the same thing again - noise is bad!
I guess that depends on your setup. My Emacs is set up so that `b` is "reblame from before this change". GitHub's blame UI has a similar button (though that, sadly, doesn't preserve in-file context).
At that point the cost of the "noise" is more or less zero.
What if my repository is linked to a CI process that deploys the live code, and the change needs to be done "now"?
If you are working for a larger company where processes are clearly defined, then that's good for you and this feature is not needed. But you are losing all of the agile features of git in the first place.
In my situation, squashing history takes away my ability to use git as a wiki of "things that didn't work out". It's important to keep that.
Well, the thing is they make it easy to find a main course. Without all the vegetarian and vegan products you would have to sort through the products that aren't explicitly marked that way to get a guess about which ones contain meat. Plus they taste good to me. I'm not looking for something that "tastes like meat" lol. As though I am just dying for meat. Nope, I'm quite fine actually.
Many times I've had the most use for a test that didn't fit into the conventional unit test format but I didn't try to get it approved because I didn't want to get into a dogmatic argument about what a test should or shouldn't be. A lot of what I worry about doesn't get tested well using unit tests.
You're underestimating how much of a pain all of this has become. For one, you'll want it to be HTTPS, and right away that always seems to be difficult even when you've done it the same way before. You're going to have to interface with, probably, PayPal, which means you get to join the world of JavaScript APIs. That means often-hard-to-follow documentation and bizarre designs. You'll probably want to set up some kind of email system, which means using one of the many awesome email plugins (which ends up having a dependency on another plugin that you can't figure out how to install). Then you have to set up CSS, and people will tell you lies about how you should use x or how y will save you so much time, but you definitely should NOT use x, and y will take more time to set up than it could possibly save. ...Just walk away, man. Walk away. I'd probably use some kind of WordPress thing, or even start off with Squarespace until I know the idea has wings.