
This is a really cool feature I did not know about. Just curious: why do you use such crazy JSON in Postgres? Why not store the data following relational database standards?


I've found it handy for storing configuration data.

E.g. user preferences

    preferences = { 
      "notifications": {
        "email": true, 
        "sms": false 
      },
      "stay_signed_in": true,
      "editor_plugin": {
        "indent_level": 2,
        "autosave": false
      }
    }
It's structured, ad hoc, and "less likely" to be queried against. Of course you could implement this in normalized tables, but it's a pain, and I don't see any real value in this case.
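For illustration, here's a minimal sketch of what this might look like in Postgres (the table and column names are hypothetical, not from the original comment):

    CREATE TABLE users (
        id          serial PRIMARY KEY,
        email       text NOT NULL,
        preferences json NOT NULL DEFAULT '{}'
    );

    -- Pull a nested preference out with -> (returns json) and ->> (returns text)
    SELECT id,
           preferences -> 'notifications' ->> 'email' AS email_notifications
    FROM users;

The whole preferences blob lives in one column, so adding a new preference key never touches the schema.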


DISCLAIMER: I'm not a strong DBA, but I'll try my hand at an answer. Please let me know if I'm talking nonsense.

It would be useful if the data meets two requirements:

-you know that it won't be queried in a relational way, but

-you don't know how the schema might evolve (beyond the fields you use in your partial index, that is) and want to save yourself the hassle of frequent schema migrations.
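As a sketch of the second point: you can index just the one or two fields you know you'll filter on, and let the rest of the document evolve freely. The table and field names here are hypothetical:

    -- Expression index on a single JSON field, partial so it only
    -- covers rows where that field is actually present
    CREATE INDEX events_kind_idx
        ON events ((payload ->> 'kind'))
        WHERE payload ->> 'kind' IS NOT NULL;

Everything outside 'kind' can change shape without any schema migration.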


I've been on a project that used Postgres' json type as its main data store. I wouldn't go down that route again, even now that jsonb is available. My thoughts:

It doesn't really make sense to say that a query won't be relational. Few result sets are ever relational - if they include a JOIN or really any grouping, sorting etc, they are a non-relational projection of your data. That's the whole point of a relational database - relational storage and ad hoc non-relational views of it.

If you don't need to migrate your data when you change the schema (ie: nothing has gone live yet), just truncate the entire database every time you change schema. No need to write migrations and you get to iterate towards a good relational schema.

If you do need to migrate your data when you change the schema, do not under any circumstances choose JSON types. Existing migration tools like Flyway, Alembic, etc only support migrations between relations and you will have to write your own, custom, half-baked and buggy schema migrations. Migrating even relational data is hard and migrating non-relational data is much harder. You won't have the support of postgres' fantastic transactional DDL so it will be hard to do migrations without downtime. Probably the only approach that will work is to migrate data as you touch it.

Even if you decide to go for JSON, make sure you understand the limitations of the JSON type in 9.3 and lower. Equality isn't defined on JSON (you have to coerce to text first) so lots of basic SQL features won't work - for example UNION or EXCEPT. 9.4 has JSONB which is a lot better.
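A quick illustration of the equality limitation (assuming 9.3's plain json type, which stores the input text verbatim):

    -- json has no equality operator in 9.3, so this is an error:
    --   SELECT '{"a":1}'::json = '{"a":1}'::json;

    -- Coercing to text works, but it compares the raw text, not the value:
    SELECT '{"a":1}'::json::text = '{"a": 1}'::json::text;
    -- false, because the whitespace differs

This is also why UNION, EXCEPT, and DISTINCT fail on json columns: they all need equality. jsonb in 9.4 stores a parsed representation and defines real equality, which fixes this.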


I'm also using the Postgres JSON type in 9.3, but my experience is different, perhaps because I come from the ZODB world and so am not under any illusions that it is a panacea!

I've ended up with a very generalised schema, all content is in JSON. I've not had to change the Postgres schema for over a year and it's easy to upgrade schemas inline while fetching from the app. We did write our own migration tools, but they're only a couple of pages of code.

I don't understand your complaint about lack of transactional DDL when using JSON. It's still there, though it's probably less important, since most of your upgrades will only touch data, giving you transactional upgrades (should you desire them) in any database.

DB queries of JSON data are definitely slow in 9.3, as the values need to be parsed on every query, but the lack of JSON equality has not been a problem. You simply use the double-chevron text extraction operator rather than the json extraction operator in your query: col -> 'foo' ->> 'bar' rather than col -> 'foo' -> 'bar'. From there you can cast to any type and join to other tables.
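A sketch of that pattern, with hypothetical table and key names: ->> returns text, which you can then cast and join on like any relational value.

    SELECT d.doc ->> 'title' AS title,
           u.email
    FROM documents d
    JOIN users u
      ON (d.doc -> 'meta' ->> 'author_id')::int = u.id;

Using -> for the final step would return json, which (in 9.3) you couldn't compare or join on.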

For us, the big advantage of using a generalized schema has been simple support for recording versions of data. With over 30 different content types, doing this with a traditional relational schema would have resulted in maybe 90 tables. Our generic schema has six.

I'm really looking forward to 9.4 as it will make ad-hoc queries across JSONB really fast using GIN indexes. I'd love to see the option of creating check / foreign-key constraints on JSON data in the future. Unfortunately it looks like the proposal for this is stalled as the implementation turned out to be rather tricky. http://blog.2ndquadrant.com/postgresql-9-3-development-array...
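For reference, the 9.4 feature being described looks roughly like this (hypothetical names; the column must be jsonb, not json):

    -- A GIN index over the whole document makes containment queries fast
    CREATE INDEX documents_doc_gin ON documents USING gin (doc);

    -- @> matches rows whose document contains the given fragment
    SELECT *
    FROM documents
    WHERE doc @> '{"status": "published"}';

One GIN index serves ad hoc queries against any key, which is exactly what a generalized schema needs.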


Thank you for clarifying the relational aspect. This is what I meant to say but I couldn't find a concise way to do so.


In this example, a relational schema would work fine, and might have been easier to read. We use HSTORE at Heap (until JSONB ships!) because we're processing event blobs with thousands of different fields, most of which are irrelevant for most events.

The added flexibility benefit is also nice. The ability to add a new property without migrating a schema has saved a lot of work.
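A minimal hstore sketch of that flexibility (table and key names are made up): new properties just appear in the data, with no ALTER TABLE.

    CREATE EXTENSION IF NOT EXISTS hstore;

    CREATE TABLE events (
        id    bigserial PRIMARY KEY,
        props hstore NOT NULL DEFAULT ''
    );

    -- Each event carries only the fields it actually has
    INSERT INTO events (props)
    VALUES ('page => "/signup", browser => "Firefox"');

    -- Missing keys simply come back NULL
    SELECT props -> 'browser' AS browser FROM events;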


"type" is probably an excellent candidate to get into the schema



