Is there a solution that doesn’t expose you to the entirety of the internet at one time? Proactive networking maybe? Asking existing employees if they have friends looking for jobs?
I’ve always wanted better search and chat organization.
But I’m at a place where I can’t determine if the ephemeral UX of chatting with AI (ChatGPT, Claude) isn’t actually better. Most chats I want to save these days are things like code snippets that I’m not ready to integrate yet.
You could join my native, cross-platform client waitlist[1] if you're looking to use your OpenAI API key. Work-in-progress but it's coming along pretty fast.
You might want to check out <https://martiansoftware.com/chatkeeper>. It's a CLI that syncs a ChatGPT export with local markdown files. I use it to keep my conversation history in Obsidian, where I can search through my conversations and link them to my other notes. (full disclosure: it's my project)
That is a perfect use case for having an extension like this. It makes it easier for you to jump back into a previous conversation, which is primarily what I use it for as well.
Cool tool. From my experience the PDF was easy to traverse.
The hardest part for me was understanding that treatment options could differ (i.e. between the _top_ hospitals treating the cancer). And there were a few critical options to consider. NCCN paths were traditional, but there are in-between decisions to make, or alternative paths. ChatGPT was really helpful in that period. "2nd" opinions are important... but again, you ask the top 2 hospitals and they differ in opinion, and any other hospital is typically in one of those camps.
This is true. But this is an ability of the hardware owners. Intel and NVIDIA are not setting the rules - and there is a real commitment to that because it's open source.
It's also confidential. Data, code, rules, ... all of these are processed together in secure enclaves. It's up to the hardware owner/users to determine that processing and to stamp/verify what they want.
BTW it's also a measure to ensure your own standards are met in remote execution - e.g. where you can ensure your data is processed privately, or your end of a contract is adhered to (something that we think resonates with an agentic/autonomous future).
"How can we ensure that the system enforces the rules that I want"
You lose some benefits around decentralized trust & temporal anchoring, but not all. DLTs are established in software supply chains and are being adapted to the AI supply chain (see below). It's not indicative of a "crypto" play.
(Your case is not the direct point, but these measures are part of strengthening the supply chain [1]. Other applications include strengthening privacy [2].)
Verifiability measures are designed to transform privacy and security promises from mere assurances into independently checkable, technical guarantees. _Generally achieving_: verification of claims (from governance/regulation to model provenance), cryptographic attestation ensuring code integrity, enforceable transparency through append-only logs and tooling, no blind trust but verifiable trust, and a structured environment for ongoing scrutiny and improvement.
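To make the "append-only logs" piece concrete, here's a minimal illustrative sketch (not any particular product's implementation) of a hash-chained log: each entry commits to the previous entry's hash, so a retroactive edit anywhere in the history is detectable by anyone who replays the chain.

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    # Canonical JSON so the hash is deterministic across runs.
    blob = json.dumps({"prev": prev_hash, "payload": payload},
                      sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def append(log: list, payload: dict) -> None:
    # Each new entry chains to the hash of the previous one.
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"prev": prev, "payload": payload,
                "hash": entry_hash(prev, payload)})

def verify(log: list) -> bool:
    # Replay the chain; any tampered entry breaks the links.
    prev = "0" * 64
    for e in log:
        if e["prev"] != prev or e["hash"] != entry_hash(prev, e["payload"]):
            return False
        prev = e["hash"]
    return True

log = []
append(log, {"event": "model-released", "digest": "abc123"})
append(log, {"event": "audit-passed"})
assert verify(log)

# Retroactively editing an old entry is detected on replay.
log[0]["payload"]["digest"] = "tampered"
assert not verify(log)
```

Production transparency logs (Certificate Transparency, Sigstore's Rekor) use Merkle trees rather than a flat chain so inclusion can be proven without replaying everything, but the trust property is the same.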
Regardless, it's about a trusted observation - in your metaphor, to help you prove in court that you weren't actually speeding.
Apple deploys verifiable compute in Private Cloud Compute to ensure transparency as a measure of trust, and presumably as a method of prevention, direct or not (depending on whether they use verifiability measures as execution gates).
Reads as compliance controls embedded into the code, with integrated gates that halt execution or verify controls are met at runtime - providing receipts alongside computed outputs. This is generally oriented toward multi-party, confidential, sensitive computing domains. As AI threat models develop, compliance checks on training, benchmarking, etc. become more relevant as security posture requires.
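The "gates that halt execution, plus receipts" pattern can be sketched in a few lines. This is a toy illustration, not any vendor's actual API: the required controls, the signing key, and the receipt format are all made up for the example. In a real deployment the key would live in an enclave or HSM and the receipt would carry a hardware attestation rather than a bare HMAC.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # hypothetical; real systems keep this in an enclave/HSM

# Hypothetical policy: the controls a job must declare before it may run.
REQUIRED_CONTROLS = {"data_residency": "eu", "pii_scrubbed": True}

def run_gated(job, controls: dict):
    # Gate: refuse to execute if any required control is unmet.
    for key, expected in REQUIRED_CONTROLS.items():
        if controls.get(key) != expected:
            raise PermissionError(f"control not met: {key}")

    output = job()

    # Receipt: binds the declared controls to a digest of the output.
    body = json.dumps({
        "controls": controls,
        "output_digest": hashlib.sha256(repr(output).encode()).hexdigest(),
        "ts": time.time(),
    }, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return output, {"body": body, "sig": sig}

# Compliant job runs and yields a verifiable receipt.
out, receipt = run_gated(lambda: 2 + 2,
                         {"data_residency": "eu", "pii_scrubbed": True})
```

A verifier holding the key can recompute the HMAC over `receipt["body"]` to check that the stated controls were in force when that output was produced; a job declaring the wrong controls never runs at all.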
Using all the top sites as well that are supposed to make the hiring process easier.