Google Maps often has problems adding stops between where I am and where I'm going. I experience this the most when I'm driving cross-country on an interstate and I search for restaurants. It often thinks I should turn around and drive back 5 miles to go to some restaurant that I'm guessing is paying Google to send it traffic.
What I really want are options between where I am and where I might be in 30 minutes.
That’s because it simply orders results based on distance from your location. That’s it. It just doesn’t have an ordering method that considers your route.
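Roughly the difference, as a toy sketch (the coordinates and the detour metric are made up for illustration; this is not based on how Google actually ranks anything):

```typescript
// Toy comparison: rank stops by straight-line distance from me
// vs. by how much of a detour they add toward where I'm headed.
type Point = { lat: number; lon: number };

const toRad = (d: number) => (d * Math.PI) / 180;

// Haversine distance in km between two points.
function distKm(a: Point, b: Point): number {
  const R = 6371;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

const me: Point = { lat: 40.0, lon: -89.0 };    // where I am now
const ahead: Point = { lat: 40.4, lon: -89.0 }; // roughly where I'll be in ~30 minutes

const stops = [
  { name: "Diner 5 mi behind me", pos: { lat: 39.93, lon: -89.0 } },
  { name: "Truck stop 20 mi ahead", pos: { lat: 40.29, lon: -89.0 } },
];

// Ordering by distance from my current location: the diner behind me wins.
const byDistance = [...stops].sort((a, b) => distKm(me, a.pos) - distKm(me, b.pos));

// Ordering by detour (extra distance the stop adds on the way to `ahead`):
// the truck stop ahead of me wins.
const detour = (p: Point) => distKm(me, p) + distKm(p, ahead) - distKm(me, ahead);
const byDetour = [...stops].sort((a, b) => detour(a.pos) - detour(b.pos));

console.log(byDistance.map((s) => s.name));
console.log(byDetour.map((s) => s.name));
```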
So: I've helped do address verification for a school district. In some states, schools have to verify where students live, because property taxes often pay for schools, so knowing where your students live determines your tax funding (it also matters for missing-children laws). I've worked with thousands of addresses now and seen a lot of uncommon situations.
Google Maps is really bad when streets are even slightly unusual. That's even with North American address conventions, which are extremely regular compared to those of many other countries.
For example, if you have both a North and a South version of a road, and the address numbering for the two overlaps -- say, both begin at a county midline road, with North numbers increasing going north and South numbers increasing going south -- it will intermittently put a pin on the wrong road or the wrong segment of the road.
Similarly, it will be confused if you have both a River Rd and a West River Rd. And it can get confused when Oak Ln becomes Oak Ct, or insist that Oak Ct is really Oak Ln even though the road name actually changes.
It also gets very confused by roads that have two names. For example, on a county line road, the east county might call it "Franklin Rd" while the west county calls it "E County Line Rd." In that case, the east side of the road has one set of addresses and the west side has another, and worse, the numbers often don't align. Except that's not what Google screws up most often. Instead, Google will sometimes insist that one or the other road doesn't exist at all. It will say that 123 E County Line Rd is actually 123 Franklin Rd, and then it will put a pin where 123 would be on Franklin Rd -- even though Franklin Rd's numbering actually starts at 3000, so that address doesn't exist.
Sometimes it insists the city is incorrect, too. If your address is "123 Miller Rd, New London" and New London is a tiny unincorporated town near Portland, Google might translate the name to "123 Miller Rd, Portland" -- sometimes even when there's a street in the next county over whose address really is "123 Miller Rd, Portland". In that case, if you enter the Portland address it will point you to the address actually in Portland, and if you enter the New London address it will show you the New London location... but it will still "correct" your New London address to Portland.
If you have a road with breaks in it, such as for a river without a bridge, it will occasionally just... put a pin at the end of one segment of the road and not find the address on the correct segment on the far side of the break.
About the only thing that's really consistent is this: if you zoom in and the pin is in the middle of the road, then Google Maps probably can't find the address and is guessing. If the pin is off the road, it's probably sitting exactly on the structure, based on GIS data Google got from the local or municipal authority.
And you might say, "Oh, but those are really easy things to get confused about! It's entirely understandable." And maybe that's true. But USPS's ZIP code lookup still knows the addresses well enough to both find and correct them for you, and ArcGIS-based interfaces also seem to find things much more easily. Google Maps was groundbreaking 20 years ago, but it really hasn't kept pace. About the only thing that seems to confuse the other sites is new construction. At the very least, I wish Google Maps would make it clearer when it's guessing versus when it has found an exact match.
All that is to say, yes, we did use Google Maps to help find addresses. But when things looked even a little weird, we assumed that Google Maps was wrong. And it usually was wrong in those cases. And what it got wrong was sometimes really, really wrong.
I am using both Prisma and Kysely in the same codebase with great success. The db schema is driven by SQL, not Prisma. It is then introspected by both Kysely and Prisma; Prisma is used in 95% of places, while Kysely is used whenever performance is critical or when Prisma doesn't support the SQL features we need.
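Roughly how that looks in practice (simplified sketch; the table/column names, the Prisma model, and the ./db-types import are made up, with the Kysely types assumed to come from something like kysely-codegen):

```typescript
import { PrismaClient } from "@prisma/client";
import { Kysely, PostgresDialect, sql } from "kysely";
import { Pool } from "pg";
// DB is the type produced by introspecting the same SQL-first schema,
// e.g. with kysely-codegen; the path and table names here are invented.
import type { DB } from "./db-types";

const prisma = new PrismaClient();
const db = new Kysely<DB>({
  dialect: new PostgresDialect({
    pool: new Pool({ connectionString: process.env.DATABASE_URL }),
  }),
});

const userId = "some-user-id";
const since = new Date("2024-01-01");

// ~95% of the code: plain CRUD through prisma.
const user = await prisma.user.findUniqueOrThrow({ where: { id: userId } });

// Hot paths or SQL that prisma can't express: drop down to kysely.
const topCustomers = await db
  .selectFrom("orders")
  .select((eb) => ["customer_id", eb.fn.sum<number>("total_cents").as("revenue")])
  .where("created_at", ">=", since)
  .groupBy("customer_id")
  .orderBy(sql`sum(total_cents)`, "desc")
  .limit(10)
  .execute();

console.log(user.id, topCustomers);
```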
Any negative consequences to letting the Prisma schema handle the underlying model/migrations?
I found out about ZenStack yesterday; I really like the RBAC/ABAC baked into the models/codegen stuff. I've been thinking about just using it for our custom logic and maybe adding Postgres RLS a la Supabase on top -- but also codegenning that from the same .zmodel that generates the Prisma models/migrations, i.e. having it generate the RLS SQL migration code too.
thoughts??
Also maybe Postgres views to handle field/attribute-level security, since RLS mostly operates on whole rows rather than individual columns.
The main goal is to secure the data at every level of the stack, from db to API to app, so there are no footguns down the road where someone with a pg user, or someone modifying our clients, can see data they shouldn't, etc.
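At the app/API layer I'm picturing something like this (rough sketch; the model and field names are made up, and I'm assuming ZenStack's enhance() from @zenstackhq/runtime):

```typescript
import { PrismaClient } from "@prisma/client";
import { enhance } from "@zenstackhq/runtime";

const prisma = new PrismaClient();

// The access rules live in the .zmodel (e.g. an @@allow('read', ...) policy
// on a hypothetical Invoice model); `enhance` wraps the client so they're
// enforced on every query.
function dbForUser(user: { id: string; orgId: string }) {
  return enhance(prisma, { user });
}

// Only rows the policies allow for this user come back, even if the
// calling code forgets to add its own where clause.
async function listInvoices(user: { id: string; orgId: string }) {
  return dbForUser(user).invoice.findMany();
}
```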
Prisma doesn't cover plenty of SQL features: custom types, more complex indexes (e.g. partial indexes with a WHERE clause), and so on. It's also a VC-backed business, so you need to be ready to drop it at almost any time; SQL/Postgres, on the other hand, is here to stay.
RLS is hard to work with, hard to debug, hard to reason about, and cumbersome. It is, however, powerful.
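To give a flavor of the cumbersome part, a minimal tenant-isolation setup tends to look something like this (sketch; the invoices table, org_id column, and the app.current_org setting are all made up):

```typescript
import { Kysely, PostgresDialect, sql } from "kysely";
import { Pool } from "pg";
import type { DB } from "./db-types"; // introspected types, as before

const db = new Kysely<DB>({
  dialect: new PostgresDialect({
    pool: new Pool({ connectionString: process.env.DATABASE_URL }),
  }),
});

// One-time migration: turn on RLS and add a tenant-isolation policy.
await sql`ALTER TABLE invoices ENABLE ROW LEVEL SECURITY`.execute(db);
await sql`
  CREATE POLICY invoices_by_org ON invoices
    USING (org_id = current_setting('app.current_org')::uuid)
`.execute(db);

// Every query path then has to remember the per-transaction setting --
// forget it and current_setting() errors (or, with the missing_ok flag,
// returns NULL and the policy silently filters out every row).
async function invoicesForOrg(orgId: string) {
  return db.transaction().execute(async (trx) => {
    await sql`SELECT set_config('app.current_org', ${orgId}, true)`.execute(trx);
    return trx.selectFrom("invoices").selectAll().execute();
  });
}
```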
Right! For RLS I found out about Atlas (atlasgo), which lets you do schema-as-code, including RLS, so my mind went to leveraging the .zmodel to generate not only the Prisma schemas and the client API/SDK codegen, but also the RLS stuff, either as plain SQL migrations or via a dedicated RLS framework.
All in all this is probably too much, and as long as the app/API level is secure with ZenStack and I don't use pg directly anywhere else, it should be 'safe'. I just wanted to harden the whole stack, so to speak... idk.
I guess the question then is: can't VS Code Copilot do the same for a fixed $20/month? It even has access to all the SOTA models like Claude 3.7, Gemini 2.5 Pro, and OpenAI o3.
VS Code's agent mode in Copilot (even in the Insiders nightly) is a bit rough in my experience: lots of 500 errors, stalls, and outright failures to follow tasks (as if there's a mismatch between what the UI says it will include in context and what actually gets fed to the LLM).
I would have thought so, but somehow no. I have a Cursor subscription with access to all of those models, and I still consistently get better results from Claude Code.
Since the user decides what happens when the state gets updated, it's up to them to address that. For me, I usually avoid re-renders when possible; I'd rather update the property associated with the state.
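For illustration, with a hypothetical minimal store (not any particular library's API), "updating the property" instead of re-rendering looks roughly like this:

```typescript
import { useEffect, useRef } from "react";

// Hypothetical tiny external store, just to show the idea.
function createStore<T>(initial: T) {
  let value = initial;
  const listeners = new Set<(v: T) => void>();
  return {
    get: () => value,
    set(next: T) {
      value = next;
      listeners.forEach((l) => l(next));
    },
    subscribe(l: (v: T) => void) {
      listeners.add(l);
      return () => {
        listeners.delete(l);
      };
    },
  };
}

const counter = createStore(0);

// Instead of setState (which would re-render the component), the subscriber
// writes straight to the DOM property tied to that piece of state.
function CounterLabel() {
  const ref = useRef<HTMLSpanElement>(null);
  useEffect(
    () =>
      counter.subscribe((n) => {
        if (ref.current) ref.current.textContent = String(n);
      }),
    []
  );
  return <span ref={ref}>{counter.get()}</span>;
}
```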
Have you faced any scenarios where that's needed? I'm curious.
> you make it sound like PHP is universally used as the backend solution, while I see it being used little these days except for legacy systems from 15-20 years ago.