Hacker News | rsanders's favorites

A minor plug, if you're doing ESP8266/ESP32 development, you can use the following code snippet and server I wrote to do automatic, secure updates over pinned HTTPS:

https://www.pastery.net/vmympk/

This is the server, a single binary (written in Rust):

https://gitlab.com/stavros/espota-server

The device will connect to the server (whenever you call doHttpUpdate(); I usually do it on startup), ask whether a new version is available, download the latest build, flash itself with it, and then reboot.

Very handy, as it's faster than USB/UART, and you don't have to disconnect from the serial console to use it.


I run 10-15 containers on my Mac and don't notice it after fixing particular containers (I don't doubt there is a more general issue).

Find out what is causing the CPU spikes with

    docker stats
or screen into the Docker for Mac VM and diagnose with top etc.

    screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
I found that particular containers were causing the issues and fixed them with alternate builds, prior versions, or resource limits on the containers.

Docker for Mac/Windows is a great product - it has allowed me to roll out Docker to developers who wouldn't otherwise deal with Vagrant or other solutions


There's a way to encode a protobuf schema in a protobuf message, making it possible to send self-describing messages (i.e. include a serialized schema before each message). I'm not sure if anyone actually does this. See http://code.google.com/apis/protocolbuffers/docs/techniques.... for details.
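The self-describing part boils down to length-prefixed framing: write the serialized schema, then the message. As a rough illustration in plain Python, with placeholder bytes standing in for a real serialized FileDescriptorSet and message (this is just the framing idea, not the actual protobuf wire encoding):

```python
import struct

def frame(schema_bytes, message_bytes):
    """Prefix each chunk with a 4-byte big-endian length, schema first."""
    return (struct.pack(">I", len(schema_bytes)) + schema_bytes +
            struct.pack(">I", len(message_bytes)) + message_bytes)

def unframe(data):
    """Split a framed payload back into (schema, message)."""
    (n,) = struct.unpack_from(">I", data, 0)
    schema = data[4:4 + n]
    (m,) = struct.unpack_from(">I", data, 4 + n)
    message = data[8 + n:8 + n + m]
    return schema, message

# Placeholder bytes stand in for real serialized protobuf content.
payload = frame(b"<descriptor-set>", b"<message>")
assert unframe(payload) == (b"<descriptor-set>", b"<message>")
```

The receiver parses the schema first, then uses it to interpret the message that follows.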

It's been a while since I used it, but Revel[0] is almost within the same family as Rails. The developers of Revel say they're inspired by the Play Framework.

Another option that looks nice is Buffalo[1]. It's a collection of libraries that are built to play nice with each other, but you don't have to use all of them as a bundle. You can use only the layers you want.

[0] https://revel.github.io/

[1] https://gobuffalo.io/docs/getting-started


There is a difference, of course. While we are still learning about their implementation, I think the statements below are true:

1. We don't support just streams. You can throw SQL at a Kafka topic as easily as SELECT * FROM `topic` [WHERE ]

2. We support selecting or filtering on the record metadata: offset/timestamp/partition (I haven't seen anything similar in Confluent KSQL)

3. We integrate with Schema Registry for Avro. We hope to support the Hortonworks schema registry soon, as well as protobuf.

4. We allow for injecting fields in the Kafka Key part. For example: SELECT _offset as `_key.offset`, field2 * field3 - abs(field4.field5) as total FROM `magic-topic`

5. Quickly looking at the Confluent KSQL "abs" function, I see it accepts Double only. Presumably everything is converted to Double before it hits the method and then converted back (too little time to understand the whole implementation).

6. Filters: related to point 2, we allow filtering on message metadata. For example: SELECT * FROM topicA WHERE (a.d.e + b) / c = 100 and _offset > 2000 and partition in (2,6,9)

Also, I'm not sure if they have customers using it yet. We do.


There's a pretty big difference, to the point that Landoop's KCQL (Kafka Connect Query Language) and Confluent's KSQL (Streaming SQL for Apache Kafka) are two different products.

- KSQL is a full-fledged Streaming SQL engine for all kinds of stream processing operations, from windowed aggregations to stream-table joins, sessionization, and much more. So it does more powerful stream processing on Kafka than what Landoop's product supports, which is simple projections and filters.

- KSQL can do that because it supports streams and tables as first-class constructs and tightly integrates with Kafka's Streams API and the Kafka log itself. We are not aware of any other products that do that today, including Landoop's tool.

- We will add support for Kafka connectors so you can stream data from different systems into Kafka through KSQL. This will cover what Landoop intended with KCQL (Kafka Connect Query Language).

- Confluent works with several very large enterprises and many of the companies that have adopted Kafka. We worked with those customers to learn what would solve real business problems and used that feedback to build KSQL. So it is modeled on real-world customer feedback.

- We'd love to hear feedback. Here's the repository https://github.com/confluentinc/ksql and here's the Slack Channel slackpass.io/confluentcommunity - #ksql

Hope that helps!


If you're using (or want to use) Terraform and consider running k8s on AWS take a look at tectonic-installer[1] and its `vanilla_k8s` setting. My opinion is that it's far better than kops `-target=terraform` output. It's also using CoreOS rather than Debian which seems reasonable.

[1] https://github.com/coreos/tectonic-installer


You can use coreos with kops. Add something like

--image 595879546273/CoreOS-stable-1298.5.0-hvm


The Kubernetes team has been doing a lot of work to make these admin tools less and less necessary. More and more pieces of it can be run from within K8s itself. For example, etcd used to need to be set up and managed externally; now it's just inside. And extensions are growing up too; see CRDs in 1.7.

And unlike setup & management tools, it appears that we have a clear "winner" for K8s app management: helm. And there's more overlap than you'd expect. For instance I recently typed "helm install prometheus" and not only did it install prometheus but it installed it with all the hooks necessary to monitor the K8s cluster.

I'm not sure why I can't do the same to get an elasticsearch, logstash & kibana stack (or similar competing stack) set up as a cluster logging solution. AFAICT right now you have to have the right flags set on your kubelet startup script to do this, but that's the sort of thing that I hope & believe that K8s is making better.

And setting up a glusterfs cluster to use as a storage provider also did a surprising amount of its setup in k8s.

Obviously K8s setup can't quite be reduced to a simple `apt-get install kubelet`, but hopefully eventually it isn't much more.


As someone who has used RabbitMQ in production for many years, I'd recommend considering NATS [1] for RPC instead.

RabbitMQ's high availability support is, frankly, terrible [2]. It's a single point of failure no matter how you set it up, because it cannot merge conflicting queues that result from a split-brain situation. Partitions can happen not just during network outages, but also under high load.

NATS is also a lot faster [3], and its client network protocol is so simple that you can implement a client in a couple hundred lines in any language. Compare to AMQP, which is complex, often implemented wrong, and requires a lot of setup (at the very least: declare exchanges, declare queues, then bind them) on the client side. NATS does topic-based pub/sub out of the box, no schema required.
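To give a sense of how simple the wire protocol is, here's a rough Python sketch of a small subset of it (PUB/SUB serialization and MSG parsing over the text protocol; a real client also handles INFO, CONNECT, PING/PONG, reply-to subjects, and the actual socket I/O):

```python
def pub(subject, payload):
    """Serialize a NATS PUB frame: PUB <subject> <#bytes>\\r\\n<payload>\\r\\n"""
    return ("PUB %s %d\r\n" % (subject, len(payload))).encode() + payload + b"\r\n"

def sub(subject, sid):
    """Serialize a NATS SUB frame: SUB <subject> <sid>\\r\\n"""
    return ("SUB %s %d\r\n" % (subject, sid)).encode()

def parse_msg(frame):
    """Parse a server MSG frame into (subject, sid, payload)."""
    header, rest = frame.split(b"\r\n", 1)
    parts = header.decode().split(" ")
    # parts: ["MSG", subject, sid, (optional reply-to,) nbytes]
    subject, sid, nbytes = parts[1], int(parts[2]), int(parts[-1])
    return subject, sid, rest[:nbytes]

assert pub("greet", b"hi") == b"PUB greet 2\r\nhi\r\n"
assert parse_msg(b"MSG greet 1 2\r\nhi\r\n") == ("greet", 1, b"hi")
```

That's essentially the whole subscribe/publish surface; compare that to AMQP's multi-step channel/exchange/queue negotiation.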

(Re performance, relying on ACK/NACK with RPC is a bad idea. The better solution is to move retrying into the client side and rely on timeouts, and of course error replies.)
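The timeout-and-retry pattern might look like this hypothetical helper (plain Python; `rpc` stands in for whatever request function your client exposes, and the flaky stub below just simulates two timeouts before a success):

```python
def call_with_retry(rpc, attempts=3, timeout=1.0, backoff=2.0):
    """Retry an RPC on timeout, instead of relying on broker ACK/NACK.

    `rpc` is any callable taking a timeout; TimeoutError triggers a retry.
    """
    delay = timeout
    for attempt in range(attempts):
        try:
            return rpc(timeout=delay)
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure to the caller
            delay *= backoff  # wait longer for the reply on each retry

# Simulated flaky endpoint: times out twice, then answers.
calls = []
def flaky(timeout):
    calls.append(timeout)
    if len(calls) < 3:
        raise TimeoutError
    return "ok"

assert call_with_retry(flaky) == "ok"
assert calls == [1.0, 2.0, 4.0]
```

Error replies still come back as normal responses; only silence is treated as failure.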

RabbitMQ is one of the better message queue implementations for scenarios where you need the bigger features it provides: durability (on-disk persistence), transactions, cross-data center replication (shovel/federation plugins), hierarchical topologies and so on.

[1] http://nats.io

[2] https://aphyr.com/posts/315-jepsen-rabbitmq

[3] http://bravenewgeek.com/dissecting-message-queues/

