And you're suggesting what, that we hand the reins to the democratic process you just described as flawed, one that is corrupted by lobbying? Yeah, why don't we just take a corrupt institution and put it in charge of more things?
Canada has forbidden the sale of cigarettes in grocery stores, and they've banned the display of any advertising for them in the stores that are still allowed to sell them.
You know, children's advertising in the 1980s and 1990s was heavily researched by psychologists and economists with the explicit goal of manipulating the emotional state of children to get them to bother their parents for toy purchases. I don't have any books or articles off the top of my head, but I've read several articles and analyses of how the pacing, pitch, and coloring of advertisements intended for children were designed to be as stimulating and upsetting as possible. There was also research put into when during the day and year you could play these commercials to the greatest effect, and the advertising slots at certain times were priced higher in response to this.
It is possible that this doesn't harm children. However, the deliberate injection of emotional manipulation and disunity into families with young children in order to stimulate the purchase of toys doesn't strike me as particularly good for the nation, and I don't know that I would have to think very hard about my decision if given the option to prevent it using the law.
It's pretty interesting to think about what happens from an economic perspective. Say I buy a house with a loan (money withdrawn from the capital markets). I hand cash to the previous owner, and they use it to pay back their own loan (returning money to the capital markets and keeping the difference).
Consider the synchronous chain:
Me: my_profit = future_sale_price - cost1
Prev owner: profit = cost1 - cost2
Prev owner 2: profit = cost2 - cost3
And so on...
Also consider a parallel behavior where I can simultaneously buy and sell multiple properties through debt.
It would be fun to model this whole chain and understand what this recursively unfolding process actually does with capital. Is it a capital sync? What behavior does it incentivize? Does it guarantee expansion/recession cycles?
The capital consists of pre-existing capital and newly created capital from debt (from a bank). This is literally how money is created (vs. direct printing by the Fed).
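Here's a toy sketch of that chain (made-up prices and an assumed 80% loan-to-value); it's only meant to make the telescoping profits and the debt-created money visible, not to be a real model:

```python
# Toy sketch of the chain described above (hypothetical numbers, not a real model).
# Each sale pays off the seller's outstanding loan; the rest is the seller's profit.
# The debt issued by the bank at each purchase is newly created money.

def run_chain(sale_prices, loan_fraction=0.8):
    """sale_prices[i] is what buyer i paid; buyer i-1 bought from buyer i at sale_prices[i-1]."""
    total_new_debt = 0.0
    profits = []
    # Walk the chain from the newest purchase (index 0) back to the oldest.
    for i, price in enumerate(sale_prices):
        new_debt = loan_fraction * price            # money created by the bank for this purchase
        total_new_debt += new_debt
        if i + 1 < len(sale_prices):
            purchase_cost = sale_prices[i + 1]      # what this seller originally paid
            profits.append(price - purchase_cost)   # seller keeps the difference after repaying
    return profits, total_new_debt

# cost1 > cost2 > cost3 > cost4, as in the chain above
profits, created = run_chain([500_000, 400_000, 320_000, 250_000])
print(profits)   # [100000, 80000, 70000] -- each link's profit = cost_i - cost_{i+1}
print(created)   # total debt-financed money injected along the chain
```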
Your advice is somewhat dangerous if people follow it expecting it to magically work. First, the NASA study is about VOCs, not PM2.5. Second, when I briefly looked into using plants to clear VOCs indoors, I found that the number of plants, the lighting, and the air circulation required are completely impractical. Plants are awesome, I love them, but they're not enough. I wish it weren't so.
It would be stupid to expect plants to clean up PM2.5 material. The best they can do is chemicals. Sure, they aren't a perfect solution, but then again, anecdotal evidence from my childhood says that more plants are better for our environment, indoors or outdoors.
And yes, it isn't a magical solution; we have to be careful about where we place plants in our house.
I'm Derek, one of the co-founders--excellent question!
The former. PipelineDB performs aggregations in memory on microbatches of events, and only merges the aggregate output of each microbatch with what's on disk. This is really the core idea behind why PipelineDB is so performant for continuous time-series aggregation. Microbatch size is configurable: http://docs.pipelinedb.com/conf.html.
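To make that concrete, here's a rough conceptual sketch (plain Python, not PipelineDB's actual implementation) of what "aggregate a microbatch in memory, then merge only the partial result with what's on disk" means for a simple count/sum aggregation:

```python
from collections import defaultdict

# Conceptual sketch of microbatch aggregate-and-merge (not PipelineDB's actual code).
# Raw events are never stored; only aggregate state per group key is kept "on disk".

on_disk = defaultdict(lambda: {"count": 0, "sum": 0.0})  # stand-in for the materialized table

def process_microbatch(events):
    # 1) Aggregate the microbatch entirely in memory.
    partial = defaultdict(lambda: {"count": 0, "sum": 0.0})
    for key, value in events:
        partial[key]["count"] += 1
        partial[key]["sum"] += value

    # 2) Merge only the (much smaller) partial aggregates into the stored state.
    for key, agg in partial.items():
        on_disk[key]["count"] += agg["count"]
        on_disk[key]["sum"] += agg["sum"]

process_microbatch([("a", 1.0), ("a", 2.0), ("b", 5.0)])
process_microbatch([("a", 4.0), ("b", 1.0)])
print(dict(on_disk))  # {'a': {'count': 3, 'sum': 7.0}, 'b': {'count': 2, 'sum': 6.0}}
```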
Can you say a bit more about "performant", or point me to some information? I haven't found any yet. I'm processing millions of protobufs per second and would love to get away from batch jobs to do some incredibly basic counting -- this seems like a fit conceptually... If it's a fit, any recommendations on the best way to get those protobufs off a Kafka stream and into PipelineDB would be great, too!
Performance depends heavily on the complexity of your continuous queries, which is why we don't really publish benchmarks. PipelineDB is different from more traditional systems in that not all writes are created equal, given that continuous queries are applied to them as they're received. This makes generic benchmarking less useful, so we always encourage users to roughly benchmark their own workloads to really understand performance.
That being said, millions of events per second should absolutely be doable, especially if your continuous queries are relatively straightforward as you've suggested. If the output of your continuous queries fits in memory, then it's extremely likely you'd be able to achieve the throughput you need relatively easily.
Many of our users use our Kafka connector [0] to consume messages into PipelineDB, although given that you're using protobufs I'm guessing your messages require a bit more processing/unpacking to get them into a format that can be written to PipelineDB (basically something you can INSERT or COPY into a stream). In that case what most users do is write a consumer that simply transforms messages into INSERT or COPY statements. These writes can be parallelized heavily and are primarily limited by CPU capacity.
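As a rough illustration (not an official example -- the topic name, protobuf schema, and stream columns are all made up), such a consumer might look roughly like this with confluent-kafka and psycopg2:

```python
# Hypothetical consumer: Kafka protobufs -> rows INSERTed into a PipelineDB stream.
# Topic name, protobuf class, and stream schema are invented; assumes scalar proto fields.
from confluent_kafka import Consumer
import psycopg2
from my_protos_pb2 import Event  # assumed generated protobuf class

consumer = Consumer({"bootstrap.servers": "localhost:9092", "group.id": "pdb-loader"})
consumer.subscribe(["events"])

conn = psycopg2.connect("dbname=pipeline user=pipeline")
conn.autocommit = True
cur = conn.cursor()

BATCH = 1000
rows = []
while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    ev = Event()
    ev.ParseFromString(msg.value())            # unpack the protobuf
    rows.append((ev.key, ev.value, ev.ts))     # flatten to the columns the stream expects
    if len(rows) >= BATCH:
        # Multi-row INSERT into the stream; COPY would also work for higher throughput.
        args = ",".join(cur.mogrify("(%s,%s,%s)", r).decode() for r in rows)
        cur.execute("INSERT INTO event_stream (key, value, ts) VALUES " + args)
        rows.clear()
```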
Please feel free to reach out to me (I'm Derek) if you'd like to discuss your workload and use case further, or set up a proof-of-concept--we're always happy to help!
That's awesome! If you don't mind, one more question: I see that stream-stream joins are not yet supported (http://docs.pipelinedb.com/joins.html#stream-stream-joins). Can you comment on when you think this feature could land, or is it still a ways off?
Sure! So stream-stream JOINs actually haven't been requested by users as much as you'd think. Users have generally been able to get what they need by using topologies of transforms [0], output streams, and stream-table JOINs. Continuous queries can be chained together into arbitrary DAGs of computation, which turns out to be a very powerful concept when mapping out a path from raw input events to the desired output for your use case.
The primary issue in implementing stream-stream JOINs is that we'd essentially need to preemptively store every single raw event that could be matched on at some point in the future. Conceptually this is straightforward, but on a technical level we just haven't seen the demand to optimize for it.
That being said, you could just use a regular table as one of the "streams" you wanted to JOIN on and then use a stream-table JOIN. As long as the table side of the JOIN is indexed on the JOIN condition, an STJ would probably be performant enough for a lot of use cases. With PostgreSQL's increasingly excellent partitioning support this is becoming especially practical.
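As a rough sketch of that workaround (the table, stream, and view names are invented, and the exact DDL varies between PipelineDB versions, so treat the SQL as illustrative rather than copy-paste):

```python
# Rough sketch of the stream-table JOIN workaround (names invented; DDL is version-dependent).
import psycopg2

conn = psycopg2.connect("dbname=pipeline user=pipeline")
conn.autocommit = True
cur = conn.cursor()

# The "slow" side lives in a plain table, indexed on the JOIN key.
cur.execute("CREATE TABLE slow_side (key text PRIMARY KEY, val numeric)")

# The "fast" side is a stream, continuously joined against the table.
cur.execute("CREATE STREAM fast_str (key text, val numeric)")
cur.execute("""
    CREATE CONTINUOUS VIEW joined AS
      SELECT s.key, sum(s.val + t.val) AS val
      FROM fast_str s JOIN slow_side t ON s.key = t.key
      GROUP BY s.key
""")

# Slow updates go into the table; fast events are written to the stream.
cur.execute("INSERT INTO slow_side VALUES ('a', 10)")
cur.execute("INSERT INTO fast_str (key, val) VALUES ('a', 1), ('a', 2)")
cur.execute("SELECT * FROM joined")
print(cur.fetchall())  # e.g. [('a', Decimal('23'))] -- (1+10) + (2+10)
```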
I also suspect that this is an area where integration with TimescaleDB could be really interesting!
Just out of curiosity, do you have a specific use case that necessitates stream-stream JOINs, or were you just exploring the docs and wondering about this?
My use case is pretty much parallel time series alignment with several layers of aggregation. I guess I perceive stream-stream joins as an easy way for me to wrap my head around how to structure my compute graph, but it seems doable with the method mentioned by @grammr. I'd hope for an interface roughly like "CREATE join_stream from (SELECT slow_str.key AS key, sum(slow_str.val, fast_str.val) AS val FROM slow_str, fast_str INNER JOIN ON slow_str.key = fast_str.key)". I do realize there are some tough design decisions for a system like this, but I'd also like to drop my wacky zmq infrastructure ;)
Great, let's evaluate the deaths caused by regular cars. Let's evaluate the risks posed by burning fossil fuels. Let's evaluate pretty much every single little detail about other auto brands and present our findings in a fashion where we can compare and rank them best to worst. Having just done this, I'm completely satisfied with Tesla's approach in comparison to what other brands have been doing.
I'm not so sure it's a great time at all. Most Alzheimer's drugs have failed dramatically; vTv Therapeutics is the most recent, I think. They had a Phase 3 drug trial end early due to futility, and their stock tanked hard. In one day I think it lost 70% of its value.
Alzheimer's drugs have not been successful, but in general it is the best time ever to start a biotech company. Returns in biotech VC over the last 5 years are better than in tech, there have been more IPOs and big M&A (three $8-12B startup acquisitions in the last 4-6 months) even with 1/5 the funding of software, there's tons of great science out there, and there are massive amounts of capital ($4B in venture funding to startups in Q1).
As far as I'm aware, the majority of failing biotech companies have very broadly targeted amyloid beta (apparently because they thought it was a good risk tradeoff). A narrow focus on APOE4 could be a wiser strategy. I could even see gene therapy playing a role.
If you have the CC genotype at both marker rs7412 AND marker rs429358, your status is APOE4.
Note that there are several other markers/genotypes that can reduce risk, but research appears to point to APOE4 status having the most impact. Also be aware that genetic reports could always have errors due to chance, operational errors, or problems with original research.
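For reference, here's a small sketch of the standard two-SNP APOE mapping (double-check strand/orientation against your own report before trusting it, and note that unphased double heterozygotes are ambiguous):

```python
# Standard two-SNP APOE haplotype mapping (rs429358, rs7412) -> epsilon allele.
# Strand/orientation conventions differ between reports, so verify before relying on this.
APOE_ALLELE = {
    ("T", "T"): "e2",
    ("T", "C"): "e3",
    ("C", "C"): "e4",
    ("C", "T"): "e1",  # very rare
}

def apoe_status(rs429358_genotype, rs7412_genotype):
    """Genotypes as two-character strings, e.g. 'CT'; assumes the two SNPs are phased per chromosome."""
    alleles = [APOE_ALLELE[(a, b)] for a, b in zip(rs429358_genotype, rs7412_genotype)]
    return "/".join(sorted(alleles))

print(apoe_status("CC", "CC"))  # e4/e4 -- the 'CC at both markers' case above
print(apoe_status("CT", "CC"))  # e3/e4 -- one copy of e4
```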