Hacker News | ta3411's comments

Regarding social interaction: I am running a hybrid US (3 people) / China (4 people) team. My teammates in China usually don't participate in meetings or "coffee hours" (mostly due to language and cultural barriers), so any kind of cross-border team building and social banter just falls flat. Does anyone have similar experience, and how did you address it?


We started building a marketplace startup earlier this year. Back then we were naive and weren't familiar with the e-commerce stack (or headless commerce APIs), so we built everything in house (user management, inventory management, database schema, API, auctions, etc.). We have a team of 7 covering backend, web, iOS, and Android. Are we making a big mistake re-inventing the wheel here? Would love some advice on when to build vs. when to buy when it comes to e-commerce APIs.


My biggest learning from my time in the eCommerce world: trivial-sounding things are very, very difficult to get right, and the edge cases you'd have to handle yourself are entry-level features in API-based solutions.

Take a single one of your items: inventory management. There's SO much going on here. People look at it and say, "What's the big deal? Decrement an integer!" But how do you handle payment failures and dunning management? Subscriptions? Bundles? Incomplete carts? Buy online, pick up in store? Blended inventories? Multiple locations? The list goes on, and the number of interconnected components makes this really hard to solve.
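To make the "decrement an integer" point concrete, here's a minimal Go sketch (all type and method names are hypothetical) of just one wrinkle: once open carts can hold stock, "available" is no longer a single counter, and you need reserve/commit/release semantics.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Inventory tracks on-hand stock separately from units reserved by
// open carts -- one reason "decrement an integer" falls short.
type Inventory struct {
	mu       sync.Mutex
	onHand   int
	reserved int
}

// Reserve holds units for a cart without committing the sale.
func (inv *Inventory) Reserve(qty int) error {
	inv.mu.Lock()
	defer inv.mu.Unlock()
	if inv.onHand-inv.reserved < qty {
		return errors.New("insufficient available stock")
	}
	inv.reserved += qty
	return nil
}

// Commit converts a reservation into a sale after payment succeeds.
func (inv *Inventory) Commit(qty int) {
	inv.mu.Lock()
	defer inv.mu.Unlock()
	inv.reserved -= qty
	inv.onHand -= qty
}

// Release returns reserved units when a cart is abandoned or payment fails.
func (inv *Inventory) Release(qty int) {
	inv.mu.Lock()
	defer inv.mu.Unlock()
	inv.reserved -= qty
}

func main() {
	inv := &Inventory{onHand: 5}
	_ = inv.Reserve(3)                 // an open cart holds 3 units
	fmt.Println(inv.Reserve(3) != nil) // true: only 2 of 5 are available
	inv.Commit(3)                      // payment succeeded for the first cart
	fmt.Println(inv.onHand)            // 2
}
```

And this still ignores payment failures mid-commit, multiple locations, and everything else on the list above.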

You're building all this, a marketplace and front ends for web & mobile? With a team of 7? I had a team of 23 and struggled to stay on top of all the nuances in just subscriptions, so you're (a) missing important use-cases or (b) way more effective than my former team. Chances are it's somewhere in between.


There's a real "it depends" flavor to any answer you'll get. If your feature set could be handled by off-the-shelf Magento with a couple of plugins, then, sure, you probably should have gone that way. But if you have unusual needs -- and Magento's rigidity means it's not hard to hit that wall -- then either you create massive technical debt kitbashing Magento or similar, or you go with one of the high-end headless systems, which are both expensive and still require you to build front- and backend solutions around them.

In turn, depending on your needs and the regulatory regimes you're working under, you may find that you could have saved time and money going with an industrial-grade solution that can seamlessly handle taxes, accounting and revenue recognition, multi-vendor logistics, credit/refunds, reverse logistics, etc. (there's a lot of etc.!), but that could be overkill for your requirements. I've seen SMBs with small digital teams build and run bespoke e-comm solutions handling all their needs (and more effectively than trying to fit a commercial square peg into a round hole), but the more complex and general their use cases, the more trouble they tend to have, especially when they get into multi-stakeholder retail scenarios (e.g., things like splitting a net-90 invoiced order between multiple future dates, vendors and warehouses, and logistics providers, while acting as a subledger that can handle revenue and income recognition properly).


This seems like an outdated view of the current e-commerce landscape. Sure, Magento is still an option, but as you pointed out, it should only be considered for basic implementations where it can be used off-the-shelf with minimal customization.

Building a fully custom solution is the most expensive option today and really unnecessary. There is no reason to build your own product management system when so many flexible options exist. Just as I would never recommend building a server and would instead push people towards containerization and the cloud, I recommend finding components that can be leveraged to streamline the custom build.

In terms of high-end headless systems, the market is large and growing. Some leading composable commerce SaaS offerings have free tiers and come with pre-built front-ends and integrations.


It's worth considering a composable commerce SaaS. Many let you pick and choose the pieces you need, so you can fill out the remaining components and only replace what you built if their version brings significant value.

At this point, the answer is always buy and then build later if necessary. Just as we once questioned building a server vs using the cloud, using pre-built flexible components gets you to market faster.

Just be careful as the market right now has many monoliths and old systems claiming to be "headless" and "composable", but in reality Magento, Salesforce, Oracle, etc. are all expensive to work with and should only be considered for basic needs.

Looking at a marketplace, you can consider marketplace-specific vendors like Mirakl and Convictional, but being niche, they can be very expensive. I would instead look at composable commerce solutions that are very flexible and can meet your marketplace needs.


If you're building a marketplace, your SaaS offerings are more limited because the functionality diverges from B2C Retail... as you've probably (re)discovered, content moderation and approval for catalog information from your third-party sellers is crucial[0] and you'll need to solve interesting problems in promoting and suppressing search results when the business asks you to support paid partnerships with some third-party sellers as "official resellers" or sponsored listings.

Right now Mirakl is the big incumbent in marketplace APIs, but whether that's the right answer will depend on your company's scale, technical capabilities, and margins.

[0] I keep a folder of examples of unmoderated third-party seller content embarrassing large companies running marketplaces, this one is my go-to example: https://www.independent.co.uk/news/world/americas/ashli-babb...


I am loving Hasura so far. However, I am getting stuck on how best to write custom logic on top of it. For example: I have a products table, which Hasura will automatically generate a schema for. But now I want a custom getProducts GraphQL query to do more advanced filtering (search, sort, ranking, etc.). What I end up doing is using a remote schema, and now my clients need to understand two different product types (one generated by Hasura, and one generated by my custom remote schema). Do you have any advice on how best to resolve this?

On top of that, do you typically parse these Hasura/Apollo responses into your client models? Or do you use the fragments directly in UI code, since they already support types?


Hm…ideally your remote schema can return one or more product ids and then you can create a relationship back to the model from Postgres.

Link to docs: https://hasura.io/docs/latest/remote-schemas/remote-relation...


This is super cool. Any idea how Tris.com is able to get Google Search results?


Curious whether you prefer TypeScript or Go for backend development. My friend group told me they prefer TypeScript because of shared code with the client and ease of hiring.


I have a Node.js + TypeScript backend stuck in limbo because of the whole ESM modules fiasco. I have outdated npm modules I can't upgrade because newer versions are ESM-only, and I have old modules that will likely never update.

It's to the point where I will only be using golang for backends moving forward.


I’ve heard the shared code argument before, but in practice the only shared code I’ve really ever seen used is shared type definitions. Now shared type definitions are huge, but they’re a solved problem using JSON schema, GraphQL, or some other interface definition language that don’t necessarily require the same language in the front end and backend.


Love:
- Ecosystem
- Clean code
- Single executable file

Dislike:
- Pointers. Maybe I am still getting the hang of them, but I feel like pointers throw off a lot of beginner programmers. Would love some practical advice here.


> pointers

Stick with pass-by-value and non-pointer receivers. Use pointers only if you have to.
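A minimal illustration of this advice (type and method names are made up for the example): value receivers operate on a copy, and a pointer receiver is only needed when the method must mutate the receiver.

```go
package main

import "fmt"

type Counter struct {
	n int
}

// Value receiver: the method gets a copy of the Counter, so it can
// never mutate the caller's value -- predictable pass-by-value.
func (c Counter) Peek() int {
	return c.n
}

// Pointer receiver: reach for this only when the method must mutate
// the receiver (or the struct is expensive to copy).
func (c *Counter) Inc() {
	c.n++
}

func main() {
	c := Counter{}
	c.Inc() // mutation requires the pointer receiver
	fmt.Println(c.Peek())
}
```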


I worked with one team who deliberately chose pointer receivers for everything. Their reasoning? The compiler can't know if the receiver for the call you are making will be nil at run time, so it doesn't complain.

Yes, they literally chose to subject themselves to runtime panics to silence the compiler.
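For anyone who hasn't hit this: a method with a pointer receiver can be called on a nil receiver, and the compiler is perfectly happy; the panic only appears at run time when a field is dereferenced. A small sketch (names are hypothetical):

```go
package main

import "fmt"

type Widget struct {
	name string
}

// With a pointer receiver, calling Name on a nil *Widget compiles
// without complaint -- the failure surfaces only at run time, when
// w.name dereferences the nil pointer.
func (w *Widget) Name() string {
	return w.name
}

func main() {
	var w *Widget // nil: the compiler does not object
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("runtime panic:", r)
		}
	}()
	fmt.Println(w.Name()) // panics: nil pointer dereference
}
```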


Here's a good rationale for when to use pointers: https://stackoverflow.com/a/23551970/255463


Thank you for your reply. It seems to me Data Studio helps you generate one-off reports? What I am looking for is more of an embedded solution where sellers log into our dashboard and can slice and dice their data directly on our website.


The reports are interactive and update along with your data; you can slice and drill down as needed. They can also be embedded into your site. https://support.google.com/looker-studio/answer/7450249

You probably want these reports available only to specific sellers, though, and you'd be gated by Google auth... so if you're looking for a better embedded solution, you may need to find something open source you can self-host.


I am finding myself too locked in to Stripe Connect (we are building a marketplace), where Stripe handles all KYC, onboarding, payments, and seller payouts. Does anyone have good recommendations on how to build redundancy into this? E.g., receive payment via PayPal and build our own payout system.


(I work at Stripe.)

With Connect, you can do onboarding flows yourself with the Custom plan. Note that information gathering and regulatory requirements for onboarding across dozens of countries are a huge amount of effort. We have whole teams working on this problem alone; it's not for the faint of heart.

In the abstract we don’t care if you do onboarding or we do. Stripe doesn’t make money there. We offer a solution because it’s a difficult problem that we’re in a position to handle for users.

With payouts you could do something similar. I believe there are platforms that pay everything out to one bank account and then pay out to customers themselves. I’m not an expert on these flows, but I believe it’s to cut down on foreign exchange fees—preventing multiple “hops”. We’re working on making this better.


Thank you for the candid response. To be honest, another main reason I want to implement another payment provider is in case Stripe Connect bans one of our sellers.

My goal is to have an approved Stripe Connect account that's controlled by my company (the marketplace) and pull some sellers, who don't want to go through Stripe onboarding/payouts, under it. Then I will build a manual payout flow on top of it.

Which platforms are you referring to? I would love to dive deeper in those.


Not cashing out in time this March before the market took a hit. I lost years of investment gains (down 80%) and honestly feel hopeless about whether I can ever recover.


This seems like an ideal use case for us. Here's my naive sketch of the workflow; can someone please comment if I am off track?

I am building an e-commerce product on AWS PostgreSQL. Every day, I want to be able to do analytics on order volume, new customers, etc.
- For us to track internally: we fire client and backend events into Amplitude
- For sellers to track: we query PostgreSQL directly and export

Now with this, I am thinking of constantly streaming our SQL table to BigQuery. And any analysis can be done on top of this BigQuery instance across both internal tracking and external export.

Is Redshift the AWS equivalent of this?


As a heavy BQ user on my side projects: there isn't really an alternative to BQ in AWS. I find that Redshift does not provide a lot of the functionality and ease of use that BQ provides.

That said the closest thing is Amazon Athena.

The architecture would basically be Kinesis -> S3 <- Athena, where S3 is your data lake, or alternatively AWS DMS -> S3 <- Athena.

To accomplish this (or the Redshift solution) you need to implement change data capture from your relational DB; for that you can use AWS Database Migration Service, like this for Redshift: https://aws.amazon.com/blogs/apn/change-data-capture-from-on...

And like this for Kinesis: https://aws.amazon.com/blogs/big-data/stream-change-data-to-...

The reason you may want to use Kinesis is that you can use Flink in Kinesis Data Analytics, just as you can use Dataflow in GCP, to aggregate some metrics before dumping them into your data lake/warehouse.


BQ is proper SaaS, vs. Redshift where you have to pick instance sizes, etc. It's amazing: true superpower stuff in how little you have to think about it to get loads out of it.


Redshift has serverless options.


Exactly this (using BigQuery but AWS for everything else) is pretty common. It takes a while to build a service like this; AWS spent too long going in the wrong direction (Redshift) and hasn't been able to catch up.


Basically, yes: Kinesis -> Firehose -> S3.

