And so what? Given the display of men's feelings w.r.t. "mixed groups", I (a heterosexual male) get the ick about some people here... For me most of it is about the space/relationship where certain things should happen, but I guess scientific misogyny is a thing too.
Also, software is a factory and the LLM is a workshop building factories. And I strongly believe that people building factories still do a lot of a) ten people pounding out the metal AND b) measuring to check.
oh, they are available to startups. Startups having the sole purpose of skimming funds by being the technology partner to some academic.
That's the best case. Then there is outright fraud:
https://cordis.europa.eu/project/id/101092295 - European dynamic provides some project management and a WordPress page for the lump sum of 800k€, and of course there is always "SOCIAL OPEN AND INCLUSIVE INNOVATION ASTIKI MI KERDOSKOPIKI ETAIREIA", headquartered here: https://inclusinn.com/. Probably still in stealth mode, using the 4M€ to "promote innovation".
> Startups having the sole purpose of skimming funds by being the technology partner to some academic.
Is cynicism about motives necessary?
I believe most people have enough self-deception and denial that we don't need to assume fraud or theft. Similarly, charities often end up being self-serving leeches, but the people involved seem to believe they are helping. Maybe I'm just naive? It is possible that most people in New Zealand are not so focused on intentional theft and fraud.
In New Zealand I've watched our government burn fucktons of money trying to invest in university "innovation". Academics convince politicians that they have valuable ideas, and politicians want to believe universities produce value. However, the government funding is horrifically managed (no business sense), and the startups lack the right genetics and fail (even with matched funding from private investors). New Zealanders lack an entrepreneurial learning environment (maybe the EU is the same): founding is difficult, and really difficult if you've never watched someone close to you succeed.
I believe the root cause is that academics are not business/financially oriented, so the startups fail because they are not run as businesses. Plus, the organisations picking investments are academic-heavy and are not run by good capitalists. Academics often have genuinely valuable ideas, but they tend not to be hyper-focused on business outcomes, and they are not money-driven. Business founders need to focus on profit (not status hunting, and definitely not academic recognition).
I wondered for a while whether the cause was my own selection bias (startups mostly fail, so I saw them fail), but I don't think that was it.
Also, the government investing organisations love heavy-handed, shitty governance (and legal overcontrol bullshit). The principals believe their advice and oversight are valuable. The investors put in bad CEOs and also force the businesses into poor decisions. I've seen private VC funding make the same mistakes.
It is really sad to see good ideas get murdered by people with government money. I'm sure they believe they are helping and genuinely try to (I'm not that cynical about motivations).
Our current government has just announced another 100 million to go towards academic startups. I just fucking wish our government would instead spend the budget on removing unnecessary red tape and improving tax incentives (in New Zealand the incentives to grow businesses or create export income are fucked, in my personal experience).
So for workflows it's like Airflow, Brigade, Hatchet or ...? How do workflows integrate with k8s (resources, ...)? Camunda can also deploy natively on k8s, but you still develop apps for Camunda, and it seems like Dapr is no different there? Why is it in the CNCF if it doesn't provide a way to build a workflow out of k8s-native artifacts (PVs, Deployments, Jobs, ...)?
While I don't play badminton (and so can't test with a racket at hand), this seems very cool! I also thought about something similar for judging bike-wheel spoke tension; I guess I have to research this a bit more now.
As for monetization: I personally don't have a problem with static ads served from your domain. Find some celebrity or brand and ask them if they want you to serve their banner.
If spokes come in standard gauges, it should work for bike wheels as well. There may be some differences due to the bike rim material, but in the case of badminton, I found that the frame material could safely be ignored.
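For anyone curious, the underlying physics is simple enough to sketch. Assuming the spoke (or string) behaves like an ideal string fixed at both ends, the fundamental frequency gives you tension directly via f = (1/2L)·sqrt(T/μ). A rough illustration with made-up example numbers, not the app's actual code:

```python
import math

def spoke_tension_newtons(freq_hz: float, length_m: float, diameter_m: float,
                          density_kg_m3: float = 7800.0) -> float:
    """Tension implied by the fundamental frequency of a (steel) spoke,
    modelled as an ideal vibrating string: T = 4 * L^2 * f^2 * mu."""
    area = math.pi * (diameter_m / 2) ** 2   # cross-section in m^2
    mu = density_kg_m3 * area                # linear mass density in kg/m
    return 4.0 * length_m ** 2 * freq_hz ** 2 * mu

# Example: a 2.0 mm steel spoke with 280 mm free length ringing at ~440 Hz
print(spoke_tension_newtons(440.0, 0.280, 0.002))  # ~1.5 kN, a plausible value
```

In practice the clamped ends and the rim coupling shift the pitch a bit, which is presumably where per-material calibration comes in.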
I can tell you the company I work at (4,000 people, legacy banking IT) has 4 people running our data lake. We likely have more people buying/"evaluating" Databricks right now (judging from calls overheard in open-plan offices), so I guess they have a point. A very sad point...
So how is this distributed Postgres still an ACID-compliant database? If you allow multiple nodes to query the same data, isn't this just Trino/an OLAP tool using Postgres syntax? Or did they rebuild Postgres and not upstream anything?
You're welcome. I think for the write path it always comes back to the old classic: consensus. In the end there's always that distributed voting mechanism to decide the write order.
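A toy sketch of that voting idea (illustrative only, not any real system's protocol): a write only becomes part of the committed order once a majority of replicas has accepted it at a given log position.

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    log: list = field(default_factory=list)

    def accept(self, index: int, write: str) -> bool:
        # A real replica would also check terms/ballots (Raft/Paxos);
        # this toy one merely refuses writes that skip ahead of its log.
        if index == len(self.log):
            self.log.append(write)
            return True
        return False

def commit(replicas: list, index: int, write: str) -> bool:
    votes = sum(r.accept(index, write) for r in replicas)
    return votes > len(replicas) // 2   # a majority fixes the write order

cluster = [Replica() for _ in range(3)]
print(commit(cluster, 0, "INSERT ..."))  # True: all replicas vote yes
print(commit(cluster, 2, "UPDATE ..."))  # False: position 2 skips ahead
```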
We pay millions to Oracle. We hit a bug, and it took 6 months for them to reproduce it and acknowledge that there is a bug. They now seem to be on the lookout for someone able to produce a fix: sales and Indian after-sales can't do that... curious!
Oracle seems like just a money-grabbing shell company at this point, and I suppose the whole hyperscaler cloud is developing towards the same point, with the leaders of those corporations repeating exactly the same talking points...
From my anecdotal experience: no. It is arcane, user-hostile and buggy. And performance for many workloads is roughly in line with open-source databases.
Some of the tooling around it is nice and it has some nice features, but I would not recommend it even if it were free.
Edit: unless the great database is MySQL; they are actually decent stewards of it, and while I still strongly prefer PostgreSQL, MySQL is pretty good these days.
It's possible it has redeeming features, but it seems more common for it to be just legacy: multiple apps accessing the same DB, leading to gridlock from a migration POV (plus career Oracle DBAs etc. in the org).
As Oracle is so expensive, it skews architecture decisions towards multiple apps accessing the same DB.
Even worse: it skews architecture decisions towards a few large physical database servers instead of many small VMs, because licensing cost is per core across the whole VM cluster, which is totally unaffordable. So you get reduced availability, higher risk, reduced separation, reduced security, higher datacenter cost, and they bill you an arm and a leg on top...
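Back-of-envelope, with a made-up (but not unrealistic) per-core price, the incentive looks like this:

```python
# Illustrative numbers only; actual Oracle pricing varies by contract.
PER_CORE_LICENSE = 23_750  # assumed USD per licensable core

# Many small VMs: Oracle counts every core in the cluster the VMs can run on.
cluster_cores = 16 * 32    # 16 hypervisor hosts x 32 cores each
print("VM cluster licensing:", cluster_cores * PER_CORE_LICENSE)    # ~$12.2M

# Few dedicated DB servers: only those hosts' cores are counted.
dedicated_cores = 2 * 32   # 2 physical DB hosts x 32 cores
print("Dedicated DB hosts:  ", dedicated_cores * PER_CORE_LICENSE)  # ~$1.5M
```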
This often isn't related to a monolith vs microservice comparison. Large enterprises and institutions tend to run a lot of completely separate applications, which then end up sharing database infrastructure unnecessarily. Think of universities, for example.
Oracle extends the problem to the opposite extreme from microservices by encouraging monolith DB consolidation, with unrelated monolithic applications on the same DB cluster for purely budgetary reasons.
> unrelated monolith applications on the same db cluster
If your "db cluster" is split into containers on one VM as you would do in any other cloud (because VMs are expensive), then you would have the same problem.
> encouraging monolith DB consolidation
Does it? I don't think so. I've worked with Oracle's stuff, and the only real difference between Oracle Cloud and other clouds is that Oracle Cloud is more expensive overall. There's nothing stopping you from running virtual machines and Kubernetes the same way you'd run them in any other cloud.
Yep. And then, when the DBs are already on the same servers and there's a need to connect previously unrelated apps to some master data, a shortcut presents itself. The DBA thinks: After all, why not? Why shouldn't I take it?
If you handle large amounts of geographical data, you'll need to invest quite a bit to move to Postgres. It's possible, but you're going to need to touch a lot of existing code and figure out new performance characteristics and so on. A lot of it will be hard for an average organisation, not because it's very sophisticated and complex, but because it will be large amounts of boring rote work that many developers don't see how they could do programmatically.
Rumour has it the same holds for some other types of data as well but I lack immediate experience in other areas.
With Oracle you also have rather robust, exhaustive documentation of error messages, and even obscure stuff is likely to have been figured out by someone in a forum thread. Postgres isn't exactly bad in this area, but you can run into things where you need to go deep into debugging yourself and figure out the minutiae of your specific version.
Containers also remove most of the issues with running several instances in development and CI environments.
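For example (assuming Docker is available and the testcontainers-python package is installed), a throwaway Postgres per test session is a few lines:

```python
from testcontainers.postgres import PostgresContainer

with PostgresContainer("postgres:16") as pg:
    url = pg.get_connection_url()  # unique, disposable instance per run
    print(url)
```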
I still don't recommend that anyone pick Oracle for greenfield stuff; instead you should work around the shortcomings of other database engines. But for a large organisation with certain demands that already has buy-in, it makes sense.
PostGIS seems leaps better to me (like the PG DX in other aspects). E.g. in Oracle you don't have 2D points. Adding a geo index can fail in the middle and leave the table in an unusable state that requires DBA magic to untangle. Etc.
This is just on top of the general technical inferiority (e.g. there are no transactional schema changes, so you don't get the safe go/no-go when applying them as part of app deploys with a migration tool).
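To make the go/no-go point concrete: in Postgres, DDL participates in transactions, so a migration either lands completely or not at all. A minimal sketch using psycopg, with a hypothetical orders table:

```python
import psycopg

with psycopg.connect("dbname=app") as conn:
    try:
        with conn.cursor() as cur:
            cur.execute("ALTER TABLE orders ADD COLUMN region text")
            cur.execute("CREATE INDEX orders_region_idx ON orders (region)")
        conn.commit()    # go: both changes land together
    except Exception:
        conn.rollback()  # no-go: the schema is exactly as before
        raise
```

In Oracle, DDL implicitly commits, so a migration that dies halfway leaves you in an in-between state to clean up by hand.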
SDO_POINT(x, y, 0) or SDO_POINT(x, y, NULL) ought to do what you want. Index corruption can be a nasty problem on Postgres too.
You need to decide if and how to perform a rollback, similar to how you would define a down() procedure in migration files. A schema change might imply changes to data; in that case you might turn off client writes, copy the table, change it, validate, do the rename dance, and turn client writes back on. If it doesn't, it might be much cheaper to operate on a single copy. How does Postgres decide on such strategies automatically?
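The copy-and-rename strategy described there looks roughly like this in practice (hypothetical table and column names, psycopg assumed, client writes paused around the whole block):

```python
import psycopg

with psycopg.connect("dbname=app") as conn, conn.cursor() as cur:
    # 1. Copy the table.
    cur.execute("CREATE TABLE users_new (LIKE users INCLUDING ALL)")
    cur.execute("INSERT INTO users_new SELECT * FROM users")
    # 2. Validate that the change is safe on the copy.
    cur.execute("SELECT count(*) FROM users_new WHERE email IS NULL")
    assert cur.fetchone()[0] == 0
    # 3. Apply the change to the copy.
    cur.execute("ALTER TABLE users_new ALTER COLUMN email SET NOT NULL")
    # 4. The rename dance: swap old and new.
    cur.execute("ALTER TABLE users RENAME TO users_old")
    cur.execute("ALTER TABLE users_new RENAME TO users")
    conn.commit()
```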
Many moons ago, when I was green and my skin was a lot smoother, I pointed out to my then boss that we could relatively easily (a few weeks of work) move our product from Oracle to Postgres and save n x $1000 for each installation we shipped to a customer.
My personal goal was to avoid becoming an Oracle expert. (Why? Because even as someone who passed advanced Oracle training easily, I found it extremely painful. One mistake towards the end of an installation could easily result in 2 days of extra work to clear it out.)
Stupid as I was, I said nothing about all the painful work we went through and mentioned only all the money we could save.
The response was something I learned a lot from.
It was mild and friendly, something along the lines of: "Here's what you don't get, young lad: the customer pays for the Oracle license on top of our original price, and we get a 10% cut. Changing to Postgres would effectively cost us money. Also, for <this industry>, when we say it is based on Oracle, they feel safe."
I'm back at Oracle today after a decade of less painful options, and Oracle is still painful. But these days I'm thankfully not the DBA and only have to deal with connection strings that make every other database look easy, different SQL syntax, etc.
EPM products were originally built on SQL Server (or on nothing, like Essbase) and then adapted to run on Oracle. So it's more like "the products commercially forced to run on Oracle, like EPM".
Not that it matters that much - there are better EPM/CPM products now available, like OneStream ;)
- institutional inertia
- some weird consultant style people in key roles (this happens around cloudy stuff too)
- the DBA-team
- "we can't move everything!"
- "we just migrated off solaris!"
however every new project with sane leadership seems to decide against oracle.
I think they've been taken over by exactly the same people leading the AI hype. Funny how in this article they are a) not advertising clearly what they are doing, b) solving a small subset of problems in a way no one asked for (I think most people just want ROCm to work at all...), and c) just adding to a complex product without any consideration of actually integrating with its environment.
> solving a small subset of problems in a way no one asked for
What do you mean? Having ROCm fused MoE and MLA kernels as a counterpart to kernels for CUDA is very useful. AMD needs to provide this if they want to keep AMD accelerators competitive with new models.
Should the matrix multiplication at the core of this not be in a core library? Why are generic layers intermixed with LLM-specific kernels when the generic layers duplicate functionality in torch?
Upstreaming that might actually help researchers doing new stuff, versus the narrow demographic of people speeding up LLMs on MI300Xs.
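To make the "generic layers" point concrete: the math inside an MoE layer is ordinary routing plus matmuls that plain torch already expresses, and the vendor kernels fuse exactly these steps for speed. An illustrative-only sketch (not AMD's code):

```python
import torch

def naive_moe(x, gate_w, expert_ws, top_k=2):
    """x: (tokens, d); gate_w: (d, n_experts); expert_ws: (n_experts, d, d)."""
    scores = x @ gate_w                       # generic matmul: the router
    topv, topi = scores.topk(top_k, dim=-1)   # pick experts per token
    weights = torch.softmax(topv, dim=-1)
    out = torch.zeros_like(x)
    for k in range(top_k):                    # the loop a fused kernel removes
        for e in range(expert_ws.shape[0]):
            mask = topi[:, k] == e
            if mask.any():
                out[mask] += weights[mask, k, None] * (x[mask] @ expert_ws[e])
    return out

x = torch.randn(8, 16)
print(naive_moe(x, torch.randn(16, 4), torch.randn(4, 16, 16)).shape)
```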
> I think most people just want ROCm to work at all
I think most people don't want to have to think about vendor lock-in related bullshit. Most people just want their model to run on whatever hardware they happen to have available, don't want to have to worry about whether or not future hardware purchases will be compatible, and don't want to have to rewrite everything in a different framework.
Most people fundamentally don't care about ROCm or CUDA or OneAPI or whatever else; it's just a means to an end.