
> Another drawback is the difficulty of learning Typst. The official documentation is confusingly organized, with information scattered unpredictably among "Tutorial", "Reference", and "Guides" sections.

I would have thought that this method of organizing documentation is preferred, as I assumed The Grand Unified Theory of Documentation[0] was well known and liked.

[0] https://docs.divio.com/documentation-system/


Something I wish SCIM did better was breaking group memberships apart from the user resource. Given SCIM's schema, with its ability to have write-only, read/write, and read-only properties, it makes a ton of sense to have a user's group memberships read-only and easy to look at. But sometimes populating the list of groups a member is in can be taxing, depending on your user/group DB (or SaaS) solution, especially because this data is not paginated.

SCIM allows clients to ignore the group membership lists via `?excludeAttributes=groups` (or members on the group resource). But not all clients send that by default. Entra does well to only ask for a list of groups or members on resources when it's really needed in my experience.

Some enterprise customers use SCIM with tons of users. Querying for the users themselves is simple because querying users is paginated and you can constrain the results. But returning a single group with 10,000 users in a single response can be a lot. It only really contains the user's identifier and optionally their display name, but if you have to pull this data from a paginated API it'll take a while to respond. Or it could still be taxing on some databases.

It'd be nice to query `/Users/:id/groups` or `/Groups/:id/members` in a paginated fashion similar to `/Users`.
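The `excludeAttributes` workaround above is easy to show concretely. Here's a minimal sketch in Go of a client building a paginated `/Users` query that skips the groups attribute (the base URL and helper name are hypothetical; only the query parameters come from the SCIM spec):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildUserQuery builds a SCIM /Users list request that omits the
// potentially huge "groups" attribute and pages through results.
// SCIM pagination uses 1-based startIndex plus a count.
func buildUserQuery(base string, startIndex, count int) string {
	q := url.Values{}
	q.Set("excludeAttributes", "groups") // skip group memberships
	q.Set("startIndex", fmt.Sprint(startIndex))
	q.Set("count", fmt.Sprint(count))
	return base + "/Users?" + q.Encode()
}

func main() {
	fmt.Println(buildUserQuery("https://example.com/scim/v2", 1, 100))
}
```

The same idea applies on the group side with `?excludeAttributes=members`.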


Another point: the SCIM schema can be confusing. The RFCs make it seem like you can define your schema however you like, and they provide a default schema on which the examples in other parts of the RFC are based.

In reality, most systems expect you to have the full default schema present without modifications and might complain when items are missing. Do you provide SCIM support without passwords (only SSO)? Okta will send a password anyway (random and unused). Does your application not differentiate between username and email? IdPs will complain if they can't set them separately. Do you not store the user's `costCenter`? IdPs will get mad and keep trying to set it because it never sticks.

Some of the time, you'll have to store SCIM attributes on your user objects which have no effect on your system at all.

The other side is making custom schema items. SCIM has you present these via the `/Schemas` endpoint. But no system (that I know of) actually looks at your schema to autopopulate items for mapping. Entra and Okta are great at letting you provide a mapping from an IdP user to a SCIM user, and then you map SCIM users back to your app's users. But you typically have to follow app documentation to map things properly if it's not using the default schema entirely.


To really support Entra in particular, you have to reference Entra's implicit spec, which is roughly documented here:

https://github.com/AzureAD/SCIMReferenceCode/tree/master/Mic...

One way this comes up: because of how those C# objects serialize, there are properties that Microsoft will send you in the form `"key": { "value": "xxx" }` but expect you to send back in the form `"key": "xxx"`.

It's best to not take the SCIM RFCs too literally.


The RFC is very clear about how extensions are supposed to be registered with IANA, which is always how RFC extensions in general work. You cannot have interoperability without a central registry.

https://datatracker.ietf.org/doc/html/rfc7643#section-10.3


There's the RFC way and then there's the real way.

IMO, many folks want SCIM support for only two providers: Azure AD/Entra and Okta.

I guess there's a third: a homegrown system an enterprise has that "supports SCIM". That one is always going to be weird.

So in reality those two vendors get to determine acceptable behavior for SCIM servers (the data stores that push data into SCIM clients like Tesseral).


Completely disagree. I work in the field and in my experience people use lots of SCIM servers, many of which home-grown since it’s not that hard to implement only the bits of the specs you need. And interoperability is quite good, better than with OAuth. The two vendors you mentioned are almost never mentioned by our customers in relation to SCIM.


You're right. Section 10.4 does make that more clear as well for the default schemas.


I like Go's explicit error handling. In my mind, a function either always succeeds (no error) or can succeed or fail. A function that always succeeds is straightforward. If a function can fail, then you need to handle its failure, because the outer layer of code cannot proceed with failures.

This is where languages diverge. Many languages use exceptions to throw the error until someone explicitly catches it, and you have a stack trace of sorts. This might tell you where the error was thrown but doesn't always provide a lot of helpful insight. In Go, I like how I always have some options to choose from when writing code:

1. Ignore the error and proceed onward (`foo, _ := doSomething()`)

2. Handle the error by ending early, but provide no meaningful information (`return nil, err`)

3. Handle the error by returning early with helpful context (return a general wrapped error)

4. Handle the error by interpreting the error we received and branching differently on it. Perhaps our database couldn't find a row to alter, so our service layer must return a not found error which gets reflected in our API as a 404. Perhaps our idempotent deletion function encountered a not found error, and interprets that as a success.
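Options 3 and 4 from the list can be sketched together. This is a minimal illustration of the idempotent-delete scenario described above; `deleteRow` is a hypothetical data-layer stub, not a real API:

```go
package main

import (
	"database/sql"
	"errors"
	"fmt"
)

// deleteRow is a stand-in for a real database call.
func deleteRow(id string) error {
	switch id {
	case "missing":
		return sql.ErrNoRows
	case "bad":
		return errors.New("connection reset")
	}
	return nil
}

// deleteUser branches on a specific error (option 4: an idempotent
// delete treats "already gone" as success) and wraps everything else
// with context before returning early (option 3).
func deleteUser(id string) error {
	if err := deleteRow(id); err != nil {
		if errors.Is(err, sql.ErrNoRows) {
			return nil // option 4: reinterpret not-found as success
		}
		return fmt.Errorf("deleting user %s: %w", id, err) // option 3
	}
	return nil
}

func main() {
	fmt.Println(deleteUser("missing"), deleteUser("u1"), deleteUser("bad"))
}
```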

In Go 2, or another language, I think the only changes I'd like to see are a `Result<Value, Failure>` type as opposed to nillable tuples (a la Rust/Swift), along with better-typed and enumerated error types as opposed to always using `error` directly to help with error type discoverability and enumeration.

This would fit well for Go 2 (or a new language) because adding Result types on top of Go 1's entrenched idiomatic tuple returns adds multiple ways to do the same thing, which creates confusion and division on Go 1 code.


My experience with errors is that error handling policy should be delegated to the caller. Low level parts of the stack shouldn't be handling errors; they generally don't know what to do.

A policy of handling errors usually ends up turning into a policy of wrapping errors and returning them up the stack instead. A lot of busywork.


At this point I make all my functions return error even if they don't need it. You're usually one change away from discovering they actually do.


> If a function fails, then you need to handle its failure

And this is exactly where Go fails, because it allows you to completely ignore the error, which will lead to a crash.

I'm a bit baffled that you correctly identified that this is a requirement to produce robust software and yet, you like Go's error handling approach...


On every project I ship I require golangci-lint to pass to allow merge, which forces you to explicitly handle or ignore errors. It forbids implicitly ignoring errors.

Note that ignoring errors doesn't necessarily lead to a crash; there are plenty of functions where an error won't ever happen in practice, either because preconditions are checked by the program before the function call or because the function's implementation has changed and the error return is vestigial.


Yet the problem still has happened on big projects:

https://news.ycombinator.com/item?id=36398874


Pedantically, every single one of those examples is a case of unspecified behaviour, not a bug. There may be no meaningful difference to the end user, but there is a big difference from a developer perspective. Can we find cases of the same where behaviour was specified?


> which will lead to a crash

No it won't. It could lead to a crash or some other nasty bug, but this is absolutely not a fact you can design around, because it's not always true.


I just want borgo[1] syntax to be the Go 2 language. A man can dream...

[1]: https://borgo-lang.github.io/ | https://github.com/borgo-lang/borgo


At this rate, I suspect Go 2 is an ideas lab for what's never shipping.


I have to ask, in comparison to what do you like it? Because every functional language, many modern languages like Rust, and even Java with checked exceptions offers this.

Hell, you can mostly replicate Go's "error handling" in any language with generics and probably end up with nicer code.

If your answer is "JavaScript" or "Python", well, that's the common pattern.


In languages built around exceptions, it's more idiomatic not to include much error handling throughout the stack, but rather only at the ends, with a throw and a try/catch. Catching errors in the middle is less idiomatic.

Whereas in Go, the error is visible everywhere. As a developer I see its path more easily since it's always there, and so I have a better mind to handle it right there.

Additionally, try/catch lumps errors together: a block wrapping multiple throwable functions catches an error... but which function threw it? If you want to actually handle an error, I'd prefer handling it from a particular function and not guessing which one it came from.
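The contrast is easy to see in code. In Go, each call's error is checked where it occurs, so there is never any question of which step failed, unlike one try/catch around both calls. A sketch with hypothetical `readConfig`/`parseConfig` stubs:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// readConfig and parseConfig are illustrative stubs.
func readConfig(path string) (string, error) {
	if path == "" {
		return "", errors.New("empty path")
	}
	return "timeout=30", nil
}

func parseConfig(raw string) (map[string]string, error) {
	cfg := map[string]string{}
	for _, kv := range strings.Split(raw, ";") {
		parts := strings.SplitN(kv, "=", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("bad entry %q", kv)
		}
		cfg[parts[0]] = parts[1]
	}
	return cfg, nil
}

func load(path string) (map[string]string, error) {
	raw, err := readConfig(path)
	if err != nil {
		// Unambiguously readConfig's error.
		return nil, fmt.Errorf("reading config: %w", err)
	}
	cfg, err := parseConfig(raw)
	if err != nil {
		// Unambiguously parseConfig's error.
		return nil, fmt.Errorf("parsing config: %w", err)
	}
	return cfg, nil
}

func main() {
	fmt.Println(load("app.conf"))
}
```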

Java with type-checked exceptions is nice. I wish Swift did that a bit better.


I'll also agree with you.

I want to start with the fact that building FlowRipples is a monumental feat of its own. Building generic tools that are adaptable to lots of situations is a difficult task, and it's impressive what was built.

But the supporting functionality in any service like this is also so important. It's one thing to have a low friction setup and way to get started, with simple steps and a quick showcase video, so that someone can get to tinkering. It's another thing to fully adopt this as a tool within your team that would be integrated into a published product.

Suddenly, like you say, you have multiple environments (Dev, QA, Staging/Pre-prod, Prod) that you have to move changes into and out of. Replicating the same changes manually will inevitably lead to human error and what worked in QA will no longer work in Staging or Production. Even a simple export + import helps with this.

I think one thing that also needs attention is parallel changes. Two people are working on different changes in the dev environment. Promoting the current state of the Dev environment to QA requires that both tasks be dev-complete, or else unfinished changes could make their way to QA and cause confusion. This is difficult when the different tasks aren't synchronized in their testing (i.e. testing starts on one ticket but not necessarily the other). It's almost like you need branching and merging and diffing, a la git, to help resolve this. That's difficult to do in low-code visual programming apps.


I wonder how this compares, conceptually, to Temporal? While Temporal doesn't talk about a single centralized log, I feel the output is the same: your event handlers become durable and can be retried without re-executing certain actions with outside systems. Both Restate and Temporal feel, as a developer coding these event handlers, like a framework where they handle a lot of the "has this action been performed yet?" and such for you.

Though to be fair, I've only read Temporal docs and this Restate blog post, without much experience in either. Temporal may not have as much of the distributed locking (or the concept of it) that Restate shows in this post.


Temporal is related, but I would say it is a subset of this.

If you only consider appending results of steps of a handler, then you have something like Temporal.

This here uses the log also for RPC between services, for state that outlives an individual handler execution (state that outlives a workflow, in Temporal's terms).


That makes a lot of sense, thank you! Extending out to other operations and not just event handlers/workflows would be neat.


Dear Okta, please include your OIDC profile claims in your ID tokens.

Actually no, that's on the spec for not enforcing that they're in the ID token; they only must be available at the userinfo endpoint.


At one time, this was my product... and oof, this one still hurts.

Section 5.1 of the OIDC spec says the standard claims can be in the ID token and/or at the Userinfo endpoint. Further, Section 2 says "ID Tokens MAY contain other Claims."

Unfortunately, one of the most common use cases for the ID token was to add someone's Groups, usually from AD. We had a number of customers with users who had a LOT of groups. I remember one where their users were in an average of 700 groups and one user had ~9000. These groups could be anything from the AD group created yesterday for a new app to that group from 15 years ago that no one wanted to delete just in case. This made for gigantic tokens.

Anyway, to address this scenario, someone at Okta came up with the concept of the "fat ID token" and the "thin ID token". The "thin" would always come back with the access token on the initial request, and the "fat" would only be available via the userinfo endpoint, where we weren't limited by payload sizes.

So yeah, now you know and sorry about that.



We were using AWS Cognito and had to make a "pre-token-generation lambda" to filter out only the AD groups we cared about. We had a huge map of AD group IDs to our internal group names (multi-tenant application, so each client had different AD group IDs), so we filtered the AD groups and added a new custom claim with our internal names.

Fun that one time where we gave admin access to some people that shouldn't have it.

Before we added that map some of our user's tokens were exceeding the limits for AWS Cloudfront cache keys.
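The filtering step described above can be sketched like this (shown in Go rather than a Lambda runtime; the function and map names are hypothetical): keep only the AD group IDs present in the per-tenant map, and emit the internal names for the custom claim.

```go
package main

import "fmt"

// mapGroups drops unknown AD group IDs and translates the rest to
// internal group names for a custom token claim. This shrinks the
// token and avoids leaking tenant-specific AD GUIDs.
func mapGroups(adGroupIDs []string, tenantMap map[string]string) []string {
	var internal []string
	for _, id := range adGroupIDs {
		if name, ok := tenantMap[id]; ok {
			internal = append(internal, name)
		}
	}
	return internal
}

func main() {
	tenantMap := map[string]string{
		"guid-123": "admins",
		"guid-456": "viewers",
	}
	// "guid-999" is not in the map, so it is dropped.
	fmt.Println(mapGroups([]string{"guid-123", "guid-999"}, tenantMap))
}
```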


Cosign. AAD ID tokens had the same issue, and we'd see tokens with 1500+ GUIDs shoved in there for the group IDs. We had (and have) a nasty "too many, go call this API you need a new permission for" outcome there.

Right next to those customers were the ones demanding our tokens always be an exact number of bytes (yeah. Really).


Ha, I dealt with some of those.

The other challenge was customers who wanted groups to show up in a particular order. Since it was literally just an array (no keys), it was just a giant alphabetized list.

Then the problem came when people used group count limits. Your group "AppDev" was always fine but "TestGroup7" may not be.. depending on how many groups the user was in and/or how they filtered those groups.

Figuring that one out was terrible.


I'd like to add that so many providers do not support either `prompt=select_account` or just natively ask the user which account to login to, mainly for OIDC. Working with IAM systems at work and using different test accounts, it's frustrating when you can't easily log out of the destination IdP for, say, SSO.


It absolutely grinds my gears - Chrome's profile system and / or Firefox's container tab system work somewhat, but it feels like a bandaid fix.


Do you want select account, implying the site supports multiple accounts at a time, or just prompt=login?

We're still shaking out bugs and bad behaviors after adding multi account on GitHub, I get why folks might not want to implement it.


My experience with `prompt=login` is also mixed. Okta's behavior does not indicate which account you're logging into (no username/email address) and only asks you to re-input your password. They have a "Back to sign in" link button, but that loses all OAuth context and does not lead you back into the app you're attempting to OAuth into, unless you specifically override that button to hit Okta's logout endpoint with a redirect back to your OAuth authorize endpoint/session.

It's janky. And I would know because we had to implement that at work.


If you thought random bit flips were bad, wait until you get random tit flips.


  Location: San Diego, United States
  Remote: Yes
  Willing to relocate: No
  Technologies: Senior Software Engineer, backend and cloud, Go, .NET, Swift, AWS, Terraform, Web (HTML + frameworks), Okta/IAM/OAuth/OIDC
  Résumé/CV: https://jamesnl.com/resume.pdf
  Email: james.n.linnell@gmail.com

