Ghidra is a lifesaver for legacy systems full of home-spun executables that keep bespoke things running, maintained by a rotating cast of technicians over the years. When they fail, it's a pain to figure out what they actually do, so reverse engineering them is sometimes the only option when a new tool needs to be built that does the same thing but without the deprecated parts. I hadn't seen this class before, so I look forward to filling in my knowledge gaps around this software. Thank you.
I'd recommend binary ninja if you're serious about reversing. Not that expensive for a personal license.
Ghidra is nice, but being FOSS it will always be slightly worse than the paid tools. It's fantastic for free, but not perfect. If reversing is a part-time thing, once a month or once every few months, then it's probably the best choice. Used it for a few years professionally.
binja is my favorite and been using it for the last year or so. just an absolute pleasure to use and collaborate with. IMO the best of all these tools. vector35 are great to work with as well. plugin development is real nice too
IDA pro is the worst. hexrays are plain awful to work with and it's so overpriced.
hopper I haven't tried, but it seems good. mac only though
r2 is interesting. great if you only have a headless connection, but difficult. Learning curve is tough, and payoff isn't necessarily there. an alternative to ghidra if you want free but want to feel more l33t
This is a strange take for me to see, maybe OP doesn't have the context that the US government has been funding Ghidra development for years now (before ultimately open sourcing it), and will no doubt continue to do so for years to come.
This is the software used by NSA and contractors to analyze malware. From a UI perspective I get that it's clunky, but from a capabilities perspective I doubt there is much lacking.
I upvoted something this morning and got an immediate "Are you enjoying Reddit?" popup that was a gateway to leave a review, I am curious if that is related.
This and apps sending advertisements in push notifications are the two things that grind my gears about the iOS experience. The latter is already against the TOS as far as I’m aware but apparently never enforced since basically any food delivery app out there violates it.
> 4.5.4 Push Notifications must not be required for the app to function, and should not be used to send sensitive personal or confidential information. Push Notifications should not be used for promotions or direct marketing purposes unless customers have explicitly opted in to receive them via consent language displayed in your app’s UI, and you provide a method in your app for a user to opt out from receiving such messages. Abuse of these services may result in revocation of your privileges.
It used to say:
> 4.5.4 Push Notifications must not be required for the app to function, and should not be used for advertising, promotions, or direct marketing purposes or to send sensitive personal or confidential information.
I suspect Uber's in too-big-to-fail territory here.
Notification channels are legitimately one of the top 3 features I missed going from Android to iOS as my daily driver. Rideshare apps (Lyft, Uber) are by far the worst offenders, with Amazon coming in a close third. I want notifications when I need to go downstairs and when I can expect service to be delivered; I do not need a notification for 10% off some aspect of the app I’ve never used and will now never use as a result.
It’s wild that I cannot opt out of these forced ads without fundamentally crippling the app itself.
I’m not sure about Uber proper, but Uber eats and DoorDash actually DO let you disable the advertising notifications :) I was so happy when I found that. In Uber eats it’s Account > Privacy > Notifications
Is it fruitless to try reporting a delivery app to apple? I used to get literally 10 marketing push notifications per day from an app so I disabled notifications and now wait for them to call me to get my order (paying COD)
IIRC custom "are you enjoying this app? please review!" popups are against the guidelines. Apple has an API to prompt for reviews, which enforces a limit of three prompts per year, and you can (supposedly, I've never seen the option) disable it system wide. https://developer.apple.com/documentation/storekit/requestin...
Both of these are correct, if you use the native API. A lot of apps hide behind their own “are you enjoying” pop up and then only give you the apple API prompt to leave a review if you say yes lol. I also dislike it, and have mine set to not allow in the system settings.
However I also have the native API prompt appear after ~2 weeks of use in my own app, at the end of the day good reviews and actual money to devs is the only way the business works, App Store is super competitive. I’d love to bypass the App Store and not deal with reviews and be able to cleanly offer upgrade pricing!
Yeah, I just tend to give one star and a review along the lines of "was pretty good, but popped up a marketing window that interfered with using the app." At least with the required Apple UI for the popup the app can't filter that out before it goes to Apple, although I assume this kind of review gets filtered out without consequence somewhere in the pipeline.
It's especially frustrating for food delivery apps. Other than messaging apps, there's no app I want a push notification from more than food delivery, but the sheer amount of spam forces me to turn off notifications.
Now my dinner is sitting on our porch getting cold because they "notified" me that my food was here, but didn't knock or ring the doorbell :-(
Hmm, I see a market for an app that can turn notifications on and off, e.g. "Allow Evil Delivery App to send me notifications for the next hour". It'd probably be impossible to get past the app store overlords, though.
Or a notification "firewall". I know Pushbullet has permission to see all my notifications and to forward them to my desktop PC, I can also dismiss them from the PC, so I guess it'd be possible to make a similar app that filters notifications and dismiss the spammy ones.
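To make the idea concrete, the core of such a notification "firewall" could be as simple as keyword matching on the notification text before deciding whether to dismiss it. A minimal Python sketch of that filtering logic; the `Notification` type and the spam patterns are invented for illustration, and a real app would receive the notifications from the OS (e.g. via Android's NotificationListenerService) rather than from a list:

```python
import re
from dataclasses import dataclass

# Hypothetical notification record; a real firewall app would get these
# from the OS notification listener, not construct them itself.
@dataclass
class Notification:
    app: str
    title: str
    body: str

# Illustrative spam heuristics: "% off" offers and common promo keywords.
SPAM_PATTERNS = [
    re.compile(r"\b\d+% off\b", re.IGNORECASE),
    re.compile(r"\b(deal|promo|offer|sale)\b", re.IGNORECASE),
]

def is_spam(n: Notification) -> bool:
    """True if the notification text matches any spam heuristic."""
    text = f"{n.title} {n.body}"
    return any(p.search(text) for p in SPAM_PATTERNS)

def filter_notifications(notifications):
    """Keep useful notifications ("your food has arrived"), drop promos."""
    return [n for n in notifications if not is_spam(n)]
```

Keyword lists are crude, but even this would catch most of the "20% off your next order" noise while letting delivery updates through.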
The worst is Snapchat. I like Snapchat; I want to have it installed with notifications turned on, but the advertising notifications were just beyond the pale. Clickbait text leading to content I have zero relation to or interest in, sent super frequently.
There’s a large body of growth-hacking specialists, working in agencies or hopping from one company to another, who will try anything. They get slapped on the wrist a lot but pretty much never face consequences for pushing the line.
I suspect that some of them have tried this continuously for the last five years, repeatedly got their pop-up rejected, until one day it went through because the person who cared about it left that position.
I’m astonished that Apple isn’t more proactive on those. Underwhelming apps never getting downvoted, pushing people toward browser interactions instead, will kill their ecosystem more surely than fraud.
Then again, the fiasco of trying to find the ChatGPT OpenAI app doesn’t sound like they care enough about that platform.
That one boggles the mind… how long would it take to clean up the imposter apps - a few hours? I guess leaving them up might be a 4D chess tactic against Microsoft
The one reality Apple must face is that if they were hostile enough to large companies in curtailing their bad behavior there would be more incentive for them to make noise about issues with the App store, either via the legal system, legislation, or supporting alternative platforms in earnest.
There are a lot of people who, if for instance Meta, Microsoft, Netflix, and Google all supported a different platform, could migrate with little effort.
Right now you're usually missing some or all of them which is why alt operating systems often have support for Android apps (APK compat) so users can have a lower switching cost.
Probably that’s it. Apple heavily rate-limits this pop-up (IIRC, an app can ask for a review 3 times in 365 days); it’s possible they’re spending their allowance to counter the negative reviews they’re likely receiving right now.
The developer can't know whether the dialog was actually displayed, so it's standard practice to time these requests strategically, but if you like you can spend them all in a row.
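The bookkeeping an app might do around that budget is straightforward: track when you asked (not whether anything was shown, since the OS decides that) and only ask at a "happy moment". A hypothetical Python sketch of that timing logic; the class and names are invented, and on iOS the actual request would go through StoreKit's review-request API:

```python
import time

PROMPT_BUDGET = 3                       # the documented cap per 365-day window
WINDOW_SECONDS = 365 * 24 * 3600

class ReviewPromptPolicy:
    """Client-side bookkeeping for when to *request* a review prompt.

    The OS decides whether the dialog actually appears, so the app can
    only record when it asked, not whether the user ever saw it.
    """
    def __init__(self, now=time.time):
        self.now = now                  # injectable clock, for testing
        self.requests = []              # timestamps of past requests

    def should_request(self, happy_moment: bool) -> bool:
        t = self.now()
        # Drop requests that have aged out of the rolling 365-day window.
        self.requests = [r for r in self.requests if t - r < WINDOW_SECONDS]
        if not happy_moment or len(self.requests) >= PROMPT_BUDGET:
            return False
        self.requests.append(t)
        return True
```

"Spending it in a row" is just calling `should_request(True)` three times back to back; "timing it strategically" is gating the call on a signal like order completed or two weeks of use.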
I don't know why Android and iOS allow apps to nag people to rate them. Even worse, they've also shipped SDKs that let users rate the app directly without leaving it at all. One recent pattern I've also noticed is apps asking you to "enable" notifications after you've blocked them.
"I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic"
This is the main criticism of the service; how did a lawyer miss this talking point?
He was probably in a rush. People use these tools to offload work they might otherwise have done if they had more time.
People who understand the systems limit their use to tasks where small occasional errors are acceptable. People who don't understand the systems are happy to accept any plausible-sounding results, especially if they don't have time to do the work themselves.
I would not be surprised if, sometime in the next year, we see a doctor or two being sued for malpractice after accepting an incorrect diagnosis from ChatGPT. When people are rushed and overworked, and the system is usually correct, these kinds of incidents are almost certain to happen at the scale of an entire society or profession.
I mentioned it before, but I think the wording around this hype cycle has duped non-technical people into thinking the true singularity-level AI revolution just happened.
You scan your face with the headset and the other person sees an avatar that is a scan of your face. Kinda defeats the purpose of "Face"time. I haven't used it yet, but with the speed of development I could see it improving to the point that the avatar and your real face are hard to tell apart.
A bit of (neutral) clarification: Axie is still active and is in the top ~60 projects by market cap [0], and the heist was actually linked to North Korean state actors [1].
The announcement coming from WWDC makes me think that the first wave is mostly for developers to build on, and then the cheaper/more advanced headsets coming later will be the ones marketed to regular users. I was on the fence until Palmer Luckey gave it a thumbs up, now I am cautiously optimistic.
The rumor mill has been consistent for over a year that the “mass market” iteration is coming in 12–18 months (I’d guess they’re targeting holiday season 2024).
I think people are underestimating how effective a “cost-almost-no-object” version is going to be at generating interest. There will be lines at Apple Stores as soon as it’s available to try.
The model is iPod vs. iPod mini/nano. Everyone wanted the former; everyone bought the latter.
> I think people are underestimating how effective a “cost-almost-no-object” version is going to be at generating interest. There will be lines at Apple Stores as soon as it’s available to try.
Not to mention all the developers who want to build something for the mass-market iteration. This is going to be like the original launch of the App Store, where even small gimmick apps get a lot of attention. If you want in on the gold rush, you’ll have to buy the first version.
My first thought at the release of ChatGPT was how strongly people would react when it doesn't match their internal bias / worldview.
Now I am curious to ask these "uncensored" models questions that people fight about all day on forums... will everyone suddenly agree that this is the "truth", even if it says something they disagree with? Will they argue that it is impossible to rid them of all bias, and the specific example is indicative of that?
The training data is books and the internet. Unless you believe that every book and every word written online is “truth” then there is no hope that such a model can produce truth. What it can at least do is provide an unfiltered model of its training data, which is interesting but also full of garbage. A better strategy might be to train multiple models with different personas and let them argue with each other.
I suppose I am hoping for something akin to the "wisdom of the crowd" [0]
It would be interesting to have varying personas debate, but then we have to agree on which one is correct (or have a group of 'uncensored' models decide which one they see as more accurate), which sort of brings us right back to where we started.
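The "wisdom of the crowd" intuition is worth making concrete: averaging many *independent*, unbiased estimates cancels noise, which is the best case the above is hoping for. A toy numerical sketch (the noise model is invented purely for illustration):

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0

def estimate():
    """One 'crowd member': unbiased but noisy guess at the true value."""
    return TRUE_VALUE + random.gauss(0, 20)   # independent noise, stdev 20

individuals = [estimate() for _ in range(1000)]
crowd_mean = statistics.mean(individuals)

# Typical single-guess error vs. the crowd average's error.
individual_err = statistics.mean(abs(x - TRUE_VALUE) for x in individuals)
crowd_err = abs(crowd_mean - TRUE_VALUE)
```

The catch, as the parent points out, is that this only works when errors are independent; if every model is trained on the same internet text, they share correlated biases that no amount of averaging removes, which is the argument for genuinely different personas rather than many copies of one model.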
> Now I am curious to ask these "uncensored" models questions that people fight about all day on forums... will everyone suddenly agree that this is the "truth", even if it says something they disagree with? Will they argue that it is impossible to rid them of all bias, and the specific example is indicative of that?
Why would you believe what these models spout is the truth at all? They're not some magic godlike sci-fi AI. Ignoring the alignment issue, they'll hallucinate falsehoods.
Anyone who agrees to take whatever the model spits out as "truth" is too stupid to be listened to.
Apologies for not being clear, "truth" might be too triggering of a word these days, but the models will need some level of objective accuracy to be useful, and it would be interesting to see where the "uncensored" models fall on that accuracy metric, and how people react to what information it displays once the guard rails are off.
> Now I am curious to ask these "uncensored" models questions that people fight about all day on forums... will everyone suddenly agree that this is the "truth", even if it says something they disagree with? Will they argue that it is impossible to rid them of all bias, and the specific example is indicative of that?
I feel like it will achieve the opposite - an uncensored model is one that wears its biases on its sleeve, whereas something like ChatGPT pretends not to have any. I'd say there's a greater risk of people taking ChatGPT as "truth", than a model which is openly and obviously biased. The existence of the latter can train us not to trust the output of the former, which I consider desirable.
Easy to check on Wikipedia: looks like it was created by the user Ezlev [0], who does not appear to be crimew's Wikipedia account, and it was updated by several other users over the last couple of years.
Because this literal fugitive hacktivist is clearly incapable of finding creative ways to get diversified contributions to their Wikipedia page. Heck, they even proudly link to their WP page from their personal site. If they didn't write the article themself, then it was most likely a friend close enough to write something that screams self-authored page. Like c'mon, just look at the thing. Have you ever seen that many citations on something that isn't just citation-spamming to stave off the Wikipedia police?