I read this yeaaaars ago. I'm about to re-read this, but before I do, I think this was the article that installed a little goblin in my brain that screams "TTS" in instances like this. I will edit this if the article confirms/denies this goblin.
It takes too long to iterate on a character design.
For more explanation: I've been playing around with Stable Diffusion on my laptop recently; I have an RTX 4070 with 8GB of dedicated VRAM, so it's not nothing.
The main problem is that it takes a lot of prompt iteration, at lower resolution and fewer sampling steps, before I know I'll get roughly what I want.
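Roughly, the loop I mean looks like this (a minimal sketch with the diffusers library; the model, resolutions, and step counts are just placeholder assumptions):

```python
# Sketch of the preview-then-final loop with Hugging Face diffusers.
# Model choice, resolutions, and step counts are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a red-haired ranger, watercolor"

# Cheap preview while iterating on the prompt: fewer steps, smaller image.
preview = pipe(prompt, num_inference_steps=12, height=384, width=384).images[0]

# Once the prompt looks right, spend the time on a full-quality render.
final = pipe(prompt, num_inference_steps=50, height=512, width=512).images[0]
final.save("character.png")
```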
I tried making a character in Eggnog, and before I could be sure what I was getting, it told me it'd take 15-20 minutes to be ready. I worry that this will just make me wait a long time for a character that isn't what I want, and starting again too many times will put me off.
The iteration and feedback loop needs to be tighter, in my opinion, or people will get unsatisfactory results and be unwilling to go back and fine-tune.
Thanks, this is helpful feedback. We're definitely frustrated with how long it takes to load a character. We'll see what we can do to give a better sense of what the character will look like before the training job kicks off. We should be able to show some intermediate results.
I think you're talking about spinning up a temporary environment running the code and connecting via a local IDE to inspect it, whereas OP is talking about hosting the IDE remotely.
At Google, people can use "Cider", a web-browser-based IDE, or a "Cloudtop", a desktop virtual machine provisioned via Google's cloud infrastructure, as alternatives to a dedicated physical workstation.
I use InfluxDB for this. It comes with a frontend UI, and you can configure Telegraf as a statsd listener, so it's pretty much the same metric ingestion as Datadog. There are Docker containers for both, which I have added to my docker-compose for local dev.
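The app side is just an ordinary statsd client pointed at Telegraf; a minimal sketch (8125 is the usual statsd port, metric names are made up):

```python
# Emit statsd metrics to a local Telegraf statsd listener, which forwards
# them on to InfluxDB for graphing.
import statsd

metrics = statsd.StatsClient("localhost", 8125, prefix="myapp")

metrics.incr("http.requests")            # counter
metrics.timing("http.latency_ms", 42)    # timer
metrics.gauge("queue.depth", 17)         # gauge
```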
I think it does log ingestion too, but I haven't ever used that; I mostly use it just for the metrics and graphing.
I work as a contractor, so I move between places. I have found a few companies trying to introduce Kafka, and every time it has been a solution in search of a problem.
I don't doubt that it has good use cases, but so far I have only encountered the zealots who crowbar it into any situation, and that has left a residual bad taste in my mouth. So I fall on the "hate it" side.
> and every time it has been a solution in search of a problem.
To refine that: in my experience at my last two jobs, the queue problem was real, but Kafka was chosen solely for its "enterpriseyness", not for any practical reason. RabbitMQ is highly performant, SQS is really easy; both are great queues. Kafka is much more, yet Kafka is chosen because "it's enterprise."
> A classic sign of "you wanted an MQ" is when a consumer writes a message to a topic to let the producer know it read the message the producer wrote...
Oof. Queued RPC is such a siren song; so many developers either stumble into this pattern or seek it out. And it's such a pain. Suddenly the latency of (often user-facing) operations depends on the latency of a queue consumer plus the time it takes to process everything that was already in the queue when the RPC was issued. Goodbye, predictable turnaround times.
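For anyone who hasn't seen it, the pattern looks roughly like this (a sketch using kafka-python; topic names and payloads are invented):

```python
# The "queued RPC" anti-pattern: publish a request to one topic, then sit on a
# reply topic waiting for the consumer to answer.
import json, time, uuid
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         value_serializer=lambda v: json.dumps(v).encode())
replies = KafkaConsumer("user-service.replies", bootstrap_servers="localhost:9092",
                        value_deserializer=lambda v: json.loads(v.decode()))

correlation_id = str(uuid.uuid4())
producer.send("user-service.requests",
              {"id": correlation_id, "op": "get_user", "user_id": 42})
producer.flush()

# The caller now blocks until the consumer has chewed through everything queued
# ahead of this request and published a matching reply.
for msg in replies:
    if msg.value.get("id") == correlation_id:
        result = msg.value
        break
```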
Maybe "it's enterprise" means that's what the enterprise standardized on. There are a couple of practical reasons that come to mind on why that's the case - a) it's more resilient and durable than messaging platforms, and b) it is a platform of dumb pipes, so to make it a central data bus managed by platform teams means that they don't have to get into the detail of which queues perform what functions, have what characteristics, etc. Rather the client teams in the various business units can take care of all of their "smarts" the way they want. It also handles log/telemetry ingestion, data platform integration, and interservice comms use cases which is pretty multi-functional. That's the primary reason why Kafka has become such a pervasive and common platform, it's not because it's trendy, in fact most operations teams would rather not even have to operate the kafka platform.
RabbitMQ is "highly performant" is a handwave. The words tell me nothing, just like any other tech/software that is described as "powerful".
At my last two major gigs, RabbitMQ was already being run in a clustered config, and it was not going well. Both places were in the middle of architecture changes to move to Kafka.
It seems like something that works great on a single big node, and you can get very big nodes these days, but I don't think it is ready for cloud/distributed durability.
I'm not aware of any Jepsen testing of RabbitMQ in distributed mode, for example, and I wouldn't consider any distributed/clustered product that hasn't let Jepsen embarrass it yet.
Cassandra and Kafka are frequent examples of YAGNI overengineering (although the fault tolerance can be nice even without scale), but the reality is that pumping up single-node solutions for too long is a big trap. Projects that start to stress a single node (I'm thinking a 4xlarge anything on AWS) should probably start thinking about jumping to DynamoDB/Cassandra/Bigtable/Kafka/etc.
RabbitMQ --> Kafka is a pretty easy lift if your messaging has good abstractions (rough sketch below).
Relational DB --> Cassandra is a much bigger headache because of the lack of joins.
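On the "good abstractions" point, something like this is roughly what I mean (interface and class names are made up for illustration):

```python
# A messaging abstraction that makes a RabbitMQ -> Kafka migration a swap of
# one class rather than an application rewrite.
from abc import ABC, abstractmethod

import pika
from kafka import KafkaProducer


class Publisher(ABC):
    @abstractmethod
    def publish(self, topic: str, payload: bytes) -> None: ...


class RabbitPublisher(Publisher):
    def __init__(self, host: str = "localhost"):
        self._channel = pika.BlockingConnection(
            pika.ConnectionParameters(host)).channel()

    def publish(self, topic: str, payload: bytes) -> None:
        # Default exchange, queue name as routing key (queue declaration omitted).
        self._channel.basic_publish(exchange="", routing_key=topic, body=payload)


class KafkaPublisher(Publisher):
    def __init__(self, bootstrap: str = "localhost:9092"):
        self._producer = KafkaProducer(bootstrap_servers=bootstrap)

    def publish(self, topic: str, payload: bytes) -> None:
        self._producer.send(topic, payload)

# Application code only ever sees Publisher, so swapping brokers is a config change.
```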
I have had to make that clustered-RabbitMQ-to-Kafka move myself, as the failure modes of RabbitMQ were very scary. It was the scariest thing in that financial institution's entire infrastructure. It's not that it failed often, but it doesn't take many middle-of-the-night calls with no good SOP for getting the cluster back to health before migrating is in the cards.
Kafka is not operationally cheap. You probably want a person or two who understand how the JVM works, which might be something you already have plenty of, or an unfortunate proposition. But it does what it says on the tin. And when you are running fleets of three-plus digits' worth of instances, very few things are more important.
I have a dim view of almost all inherently single-node datastores that advertise a clustering hack (and they are hacks) as an add-on (yes, even PostgreSQL). Sure, it will work in most cases, but the failure modes are scary for all of them.
A distributed database will have network failures and conflicting writes, and it has to either accept being down whenever part of the network is down (CP) or adopt a "hard/complex" scheme for resolving conflicts (AP). Cassandra has tombstones, cell timestamps, compaction, repair, and other annoying things. Other databases use vector clocks, which are more complex and space-intensive than cell timestamps.
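A toy sketch of the timestamp/tombstone idea (not Cassandra's actual code, just the concept):

```python
# Last-write-wins with cell timestamps: a delete has to be recorded as data
# (a tombstone), otherwise a stale replica resurrects the value during repair.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Cell:
    value: Optional[str]   # None means this cell is a tombstone (a recorded delete)
    timestamp: int         # writer-assigned timestamp


def merge(a: Cell, b: Cell) -> Cell:
    # Last-write-wins: the newer cell wins, even if it's a tombstone.
    return a if a.timestamp >= b.timestamp else b


# Replica 1 saw the delete, replica 2 still holds the old value.
deleted = Cell(value=None, timestamp=200)
stale = Cell(value="alice@example.com", timestamp=100)
assert merge(deleted, stale).value is None  # the delete survives repair

# Without the tombstone, repair would only see the stale cell and bring the value back.
```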
It's tiring to see move-fast-and-break-things attitudes applied to databases. Sure, the first year of your startup can run that way. But your database is the first thing to formalize, because your data is your users/customers: lose your data and you lose your users/customers. And sorry, but scaling data is hard; it's not a one-or-two-sprint "investigate and implement". In fact, if you do it that way, unless you are using a database the team has years of prior experience administering and tuning, you are doing it wrong.
"AWS/SaaS will eliminate it for me"
Hahahahaha. No it won't. It will make your life easier, but AWS DOESN'T KNOW YOUR DATA. So if something is corrupted or wrong, or there is a failure, AWS might have more of the recovery options turnkeyed for you, but it doesn't know how to validate that the recovery succeeded for your organization. It is blind trust.
AWS can provide metrics (at a cost), but it doesn't know your performance or history. If your data and volumes are of any scale, you will still need to know how to analyze, replicate, performance-test, and optimize your usage.
And here's a fun story: AWS sold RDS on "zero-downtime upgrades". Four or five years later, a major version upgrade was forced by AWS... and it wasn't zero downtime. Yes, it was only an hour or so, and they automated it as much as they could. But it was a lie. And AWS forced the upgrade; you had no choice in the matter.
Most clustering vendors don't advertise (or don't even know) what happens in the edge case where a network failure occurs in the cluster and writes don't propagate to all nodes during that grey state. Then the cluster is in a conflicted-write state. What's the recovery? If you say "replay the commit log on the out-of-sync nodes", you don't understand the problem, because deletes throw a huge wrench into that assumption.
From my understanding of Cassandra (and from the numerous times I've looked, Kafka appears similar, with quorums and the like), it's built on a lot of partition-resilience techniques.
Not your main point, but MongoDB didn't commission Kyle to do that report as they had in the past; he did it on his own time. That's why his report doesn't mention repeat testing. They do actually run his tests in their CI, and those new tests were used to isolate that specific bug. Moreover, some of the complaints about weak durability defaults for writes were later fixed: https://www.mongodb.com/blog/post/default-majority-write-con.... They still default to a weak read concern, but writes are fully durable unless you specifically change the behavior. For what it's worth, I agree with Kyle that they should have stronger defaults, but I don't really see a problem with MongoDB's response to the report, because there is room to disagree on that.
Do you have a source for this? I got the impression at the time that there was some commissioning of his services, but that they didn't like the report. But he publishes his work and released the report, which forced them to deal with it.
Every distributed tech fails when he tests it, but the tenor and nature of the MongoDB report was different. Between the lines, it basically said "do not use this product".
MongoDB has a history of really crappy persistence decisions and silently failed writes, and of saying "we fixed it in the next release" as soon as the issue gets publicized. The same thing happened here, of course. I simply don't trust the software or the company.
MySQL has the same annoying pattern in its history, although I have more confidence in the software because of the sheer number of users.
Still, I would probably pick PostgreSQL for both relational and document stores.
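By "document store" I mean something like a JSONB column with a GIN index; a minimal sketch with psycopg2 (table and connection details are invented):

```python
# PostgreSQL as a document store: store documents in a jsonb column and index
# them with GIN so containment queries stay fast.
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("dbname=app user=app")  # assumed local database
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS profiles (
            id  bigserial PRIMARY KEY,
            doc jsonb NOT NULL
        );
        CREATE INDEX IF NOT EXISTS profiles_doc_idx ON profiles USING gin (doc);
    """)
    cur.execute("INSERT INTO profiles (doc) VALUES (%s)",
                [Json({"name": "alice", "tags": ["admin", "beta"]})])
    # Containment query served by the GIN index.
    cur.execute("SELECT doc FROM profiles WHERE doc @> %s",
                [Json({"tags": ["admin"]})])
    print(cur.fetchall())
```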
Source for which claim? Kyle was paid for work testing 3.4.0-rc3 [1] and 3.6.4 [2], which analyzed single-document concurrency in a sharded configuration. Those tests run in their CI [3]. MongoDB had some somewhat misleading copy on their website about the results of those tests, so Kyle decided to test the new multi-document transactions feature in 4.2.6 and found some bugs.
It's fair not to trust the database or the company; I don't blame you for that. But I think Kyle's MongoDB 4.2.6 report was not nearly as concerning as his PostgreSQL 12.3 report, which found serializability bugs in a single-instance configuration, among other surprising behaviors. MongoDB's bugs were at least in a new feature in a sharded configuration. I don't think his most recent report was actually as negative as it may read to you. I say this as someone who mostly runs PostgreSQL, by the way!
As a side note, I believe there are consistency bugs existing right now in both MongoDB and PostgreSQL (and MySQL and Cassandra and CockroachDB and...) waiting to be discovered. I'm a jaded distributed-systems operator :)
I always find the "it's enterprise" claim so humorous, given how much time I've had to invest in convincing enterprises that Kafka wasn't some weird fly-by-night technology that couldn't provide for the demanding enterprise.
The biggest issue for me is people using Kafka for MQTT; MQTT is already a pub/sub broker. The other issue is thinking of Kafka as some kind of "innovative" data ingestion tool, so now instead of 50 extract jobs per day, you've got to reconcile millions of events in real time. I think message brokers make sense, but they are message brokers, nothing else, no?
Say I want to log all HTTP requests to the server (I know, "log" is a loaded word here), then process those logs into aggregates and stick them in a time series.
Would it be insane to "log" everything into Kafka? Or what would be the more "correct" tool for that job?
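Something like this is what I have in mind (a sketch with kafka-python; topic and field names are invented):

```python
# Log each HTTP request as an event into Kafka, then roll the raw events up
# into per-minute counts that would be written to a time-series store.
import json, time
from collections import Counter
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         value_serializer=lambda v: json.dumps(v).encode())

def log_request(path: str, status: int) -> None:
    # Called from the web server's request hook.
    producer.send("http-requests", {"ts": time.time(), "path": path, "status": status})

# Elsewhere, a consumer aggregates the stream.
consumer = KafkaConsumer("http-requests", bootstrap_servers="localhost:9092",
                         group_id="request-aggregator",
                         value_deserializer=lambda v: json.loads(v.decode()))
per_minute = Counter()
for msg in consumer:
    minute = int(msg.value["ts"] // 60) * 60
    per_minute[(minute, msg.value["status"])] += 1
```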
That's why the sentence in the article, "but almost every technology company uses it", should be rephrased as "but almost every technology company does not need it".
I don't think so. I played the original Civ on my Amiga; I love #6, but my favourite was probably #4. I'm not sure where that'd put my estimated age, but likely lower than it is.