A protocol cannot be open source. It can have a full, published specification, but a protocol has no source code; only an implementation does.
Also, what the heck does it do? Neither the README nor the title says anything about that. The closest thing to such a description is the mention that it "supports multimedia", whatever that means.
Please don't go into attack mode here. The last thing someone needs when they take the risk of sharing their work on a large public forum is someone pettily berating them and then hounding them when they try to reply. If you were genuinely interested in getting information or providing feedback, there are lots of ways to do it that don't come across as just wanting to pound something.
Edit: it looks like we've already had to warn you about not being a bully on Hacker News. Moreover, it looks like you've been posting like this a lot. This is something we ban accounts for if people keep doing it, so please take the spirit of this site to heart and treat your fellow community members better from now on.
> Edit: it looks like we've already had to warn you about not being a bully on Hacker News. Moreover, it looks like you've been posting like this a lot.
OK then, please be more specific here. You're being so generic that I can't
tell what exactly is wrong with my posts (apart from their being disliked) and
what I should change to keep from being banned. The only thing I can think of
is to stay away from commenting at all. If you criticise from the high horse
of being a moderator wielding the power to ban people, at least be
constructive in doing so.
Ciao dozzie, I omitted "implementation" from the title because I thought it was obvious enough. It seems it wasn't.
The project comprises both independent research in defining a specification, and an open-source implementation based on that specification.
If you want to know what it does, read the specification, or at least the first sentences of the README :)
"PJON® (Padded Jittering Operative Network) is an Arduino compatible, multi-master, multi-media network protocol. It proposes a Standard, it is designed as a framework and implements a totally software emulated network protocol stack that can be easily cross-compiled on many architectures like ATtiny, ATmega, ESP8266, ESP32, STM32, Teensy, Raspberry Pi, Linux, Windows x86 and Apple machines. It is a valid tool to quickly and comprehensibly build a network of devices. Visit wiki and documentation to know more about the PJON Standard."
I think the OP’s critique of your README being vague is fair, although, judging from the downvotes, he comes across as a little harsh.
Your README is needlessly abstract. It’s tantamount to saying you’ve invented a new wheel.
Well, wheels have all sorts of specifications, they're made of various materials, can have different weights, etc. They also have all sorts of use cases. From plastic wheels used in toys, rubber wheels used in cars, all the way to various wheels used for the construction of a rocket.
Although conceptually the wheel is an object that allows for easy rotation, you wouldn't use a plastic toy wheel for the construction of, say, an automobile, would you? Also, not all wheels are used as rotary devices. Some can be used for just support.
Going back to your network protocol. What does this protocol allow one to do? What can it be used for? Be specific!
PJON is a general-purpose "wheel" designed to work on a "chariot" (an ATtiny85) as well as on a "Formula 1 car" (a machine with a full operating system like Linux, Windows x86 or macOS). It has "all-weather tires" able to run on "mud", "tarmac" and "sand" (it operates at layer 2, agnostic of the data link).
That is probably why the README seems too general and not specific. The protocol is made NOT to be specific, and its implementation is designed so the same codebase can be executed everywhere, being 100% software-defined (or "emulated").
"... is an Arduino compatible, multi-master, multi-media network protocol. It proposes a Standard, it is designed as a framework and implements a totally software emulated network protocol stack that can be easily cross-compiled on many architectures like ATtiny, ATmega, ESP8266, ESP32, STM32, Teensy, Raspberry Pi, Linux. It is a valid tool to quickly and comprehensibly build a network of devices". If this is a wall of words that says nothing about what the project is for, we are not from the same universe. Also, I don't think one word omitted from a title that must be short turns the whole project into incomprehensible garbage, considering the amount of time and effort that has been invested in the docs and specs.
I am sorry dozzie, I did not want to be aggressive or dry; I just don't get what you mean. PJON works pretty much everywhere. What you can use PJON for depends on your needs and on the architecture that will execute it. In some cases PJON can be useful to "wire a bucket of Arduinos pin-to-pin to each other", but it can also be used to create a virtual network of computers running Windows x86, macOS or Linux, operating over the internet infrastructure. I think you would need some time to go through the available documentation and spec to get a clearer view of its features and limits.
Immediately after the blurb is a section with key properties/selling points, which among other things explains the "multi-media" part. The wording of the description and README isn't perfect (I suspect the author is not a native English speaker?), but it does contain the necessary details.
Ciao Detaro, thank you for your comment. Could you please point out the inconsistencies you have found? I would be happy to fix them. Yes, I am not a native English speaker, sadly :(
But I too was confused by what this PJON thing was supposed to do, until I looked up i2c, because of this line early in the README: "It was originally developed as an open-source alternative to i2c and 1-Wire"
"The Inter-integrated Circuit (I2C) Protocol is a protocol intended to allow multiple “slave” digital integrated circuits (“chips”) to communicate with one or more “master” chips. Like the Serial Peripheral Interface (SPI), it is only intended for short distance communications within a single device. Like Asynchronous Serial Interfaces (such as RS-232 or UARTs), it only requires two signal wires to exchange information."
So it's a very low level, _literal_ wire protocol, I guess. I don't know much about hardware.
I understood what the project is about as soon as I looked at the first paragraph of the README. I guess it takes some patience and some topic-related knowledge to get the most out of READMEs. You can't really expect software authors to explain everything in a README as if their audience were 12-year-olds.
Of course it doesn't need to be audio or video. That doesn't change the fact
that you used a term that has more than one meaning without providing any
context in which to interpret it. If somebody misreads it, then the problem is
with your prose, not with the reader.
If only there were a certificate authority management tool that was convenient
to use from the command line and through an API, so it could be made into
a company-wide service.
There is this old TinyCA that comes with OpenVPN, but it's awful and can't do
much (I don't even remember if it could revoke a certificate). There are a few
instances of web-only CAs, and there are desktop/GUI applications. But the
command line? /usr/bin/openssl only, and it's unwieldy. The situation is even
worse for CA libraries.
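To give a taste of that unwieldiness, here is a minimal sketch of running a one-off internal CA with plain openssl(1). All names, paths and subjects are made up for illustration, and a real setup additionally needs an index database, serial bookkeeping and CRL generation via `openssl ca`:

```shell
# Create a self-signed CA key and certificate (illustrative names only).
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 3650 \
    -subj "/CN=Example Internal CA" -out ca.crt

# Issuing a single server certificate takes three more manual steps:
# key generation, CSR creation, and signing with the CA key.
openssl genrsa -out server.key 2048
openssl req -new -key server.key \
    -subj "/CN=server.example.com" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out server.crt
```

And that is before revocation, which is exactly the bookkeeping a proper CA tool should hide behind an API.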
People like to fetishize OpenSSH's CA (for both client keys and server keys),
but there's still a lot to do before it becomes usable. (Though the same stands
for the traditional save-on-first-use method, honestly.) You're basically
proposing to deploy software that maybe will be usable in a few years, with
a big "maybe", because until now it hasn't materialized.
> And unless you're aggressively tracking your distro's package releases you'd better hope that the new libdep doesn't introduce any breaking bugs.
Or use a distribution that does not break shit left and right, like Debian or
Red Hat (CentOS).
> Unless you have the wall clock time to actually define and test supported distributions you probably want to pretend the system python doesn't exist.
If you write software that will be run by others (which usually means open
source, probably libraries), yes. If you write software that will only be run
by you (pretty much all dynamic websites a.k.a. webapps land in this
category), you don't want to have three different distributions in half
a dozen different versions anyway, so you can pin yourself to the target
environment just as well.
> If you write software that will be run by others (which usually means open source, probably libraries), yes. If you write software that will only be run by you (pretty much all dynamic websites a.k.a. webapps land in this category), you don't want to have three different distributions in half a dozen different versions anyway, so you can pin yourself to the target environment just as well.
Yeah, my primary software development mode is 1) gather opensource software, 2) build web app or platform-specific app.
> Or use a distribution that does not break shit left and right, like Debian or Red Hat (CentOS).
It's crazy that this even has to be said in this day and age. I suppose distro-specific patches that maintain a secure and stable API have given way to vendored static dependencies that may or may not ever be upgraded, and rebasing your dependencies introduces the same set of problems.
I foresee dynamically linked Go with platform-specific binary packaging becoming the future. Recompiling the same bits of software ad infinitum is probably going to get old.
My industry, vfx, isn't as ubiquitous as web development, but it is fairly large with commercial interests to motivate making the "right" decisions and not too odd of a use-case. I've heard some second-hand stories about Guido crashing on the couch of some guys at ILM back in the "early days" helping to spur adoption. However, we're still pretty solidly on Python2 with proposals to start adopting Python3 next year. Most of the commercial applications run Python inside them and the apps are very, very scriptable.
Houdini is a fairly ubiquitous tool for creating dynamics (fire, water, destruction, dust) and has been around since 1996. In general, Houdini users tend to prefer Ubuntu, and most medium to large vfx houses use CentOS/RHEL (Pixar, DreamWorks, ILM).
For Linux and macOS, Houdini uses your system's Python but it has to monkey patch parts of the standard library in order to work smoothly. You can set a flag (and on Windows this is the default) where it will use the Python shipped with the application.
At my current job we're using slightly out-of-date versions of everything, which is quite common in production. We're running CentOS 7.2, Houdini 15.5 (released Nov 2017), and Python 2.7.5. If you call "httplib.HTTPSConnection('google.com')" it throws an exception because of ssl library changes across 2.7 releases. I haven't tried any other versions, but the requirements for the current version of Houdini say "CentOS 6+ (64-bit)" (among other OSes).
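A quick way to see this class of breakage coming is to check which OpenSSL your interpreter was built against, since old builds can't negotiate the TLS versions servers now require. A minimal diagnostic sketch (written for Python 3, even though the versions above are Python 2):

```python
import ssl

# The OpenSSL the interpreter was linked against at build time; on an old
# CentOS this is often 1.0.1 or earlier, which many HTTPS endpoints reject.
print(ssl.OPENSSL_VERSION)

# Whether this build can speak TLS 1.2 at all.
print(getattr(ssl, "HAS_TLSv1_2", "unknown"))
```

If the reported OpenSSL is ancient, HTTPS calls from the stdlib will fail against modern servers no matter how the application code is written.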
This isn't unique to Python, but it's very hard to properly support real-world clients and dynamic libraries across operating systems, commercial applications, and less well-funded open source or independent efforts.
My M.O. is: if a technology is core to your business, you should control it. Which means not using the OS's Python. Upgrading Python and the OS independently makes so many things much easier when you have a large codebase.
> Or use a distribution that does not break shit left and right, like Debian or Red Hat (CentOS).
Sadly, there is a community of devs who want to rely on libflakey 0.0.1-beta, released an hour ago, and think that waiting for APIs to stabilise and distros to securely package things is unprofessional (!).
Using libflakey (lol!) is not necessarily a bad thing. The proper course would be to pull out what you need and make sure the interface you care about is sane and works well; you can revisit the upstream in the future.
Raw npm-style clone-from-master seems to be prolific, though.
> Those were never necessary for operational purposes. If you were selling your users to get Google's analytics, that's a different matter.
What about simple session cookies? You need to give end-users the information on how your service uses cookies, if I understand it correctly.
> Don't collect user's data, then you don't need privacy statements nor EULAs about that.
That would be optimal, of course – and I'm not even sure if it saves you from having a privacy statement – but if you have something like a login form, you'll need to collect email addresses (or something else users can use to reset their lost passwords). This is personal information, which is subject to GDPR.
> Are you Microsoft or Homebrew team that you steal users' data unless opted out?
GDPR mandates opt-in to almost everything. And you need to be explicit about what you are doing with the data, in order to be able to provide opt-in.
> Open source that doesn't steal users' data is already GDPR-compatible.
>> Those were never necessary for operational purposes. If you were selling your users to get Google's analytics, that's a different matter.
> What about simple session cookies? You need to give end-users the information on how your service uses cookies, if I understand it correctly.
No, from what the bigger half of the internets says, you don't need consent for
session cookies (the ones that are necessary for the login form to work).
> if you have something like a login form, you'll need to collect email addresses (or something else users can use to reset their lost passwords). This is personal information, which is subject to GDPR.
Nope. For keeping users logged in (especially if you don't require logging in)
you don't need separate explicit consent.
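Concretely, a strictly necessary session cookie carries no personal data and needs no consent machinery. A minimal sketch with the Python standard library (the cookie name is illustrative):

```python
import secrets
from http.cookies import SimpleCookie

# A random session identifier that only keeps the user logged in: it
# identifies the session, not the person, so the "strictly necessary"
# exemption applies and no consent banner is needed for it.
cookie = SimpleCookie()
cookie["session_id"] = secrets.token_urlsafe(32)
cookie["session_id"]["httponly"] = True   # not readable from JavaScript
cookie["session_id"]["secure"] = True     # only sent over HTTPS
cookie["session_id"]["samesite"] = "Lax"  # not sent on cross-site POSTs

# The value for a Set-Cookie response header.
header_value = cookie["session_id"].OutputString()
print(header_value)
```

Consent questions only start once such an identifier is joined with tracking or profiling data.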
>> Open source that doesn't steal users' data is already GDPR-compatible.
Nah. You can screw with the BEAM’s reduction-scheduling even in pure Erlang code: just write a really long function body consisting only of arithmetic expressions (picture an unrolled string hash function.) Since it contains no CALL or RET ops, the scheduler will never hit a yield point, even after going far “into the red” with the current process’s reduction budget.
You just never see this in real Erlang code, because who would code that way? If you want to be idiomatic, use recursion. If you want to be fast, use a NIF. Why ever use an unrolled loop body?
But it can happen, and therefore, the BEAM does not guarantee preemption, even in Erlang. Reduction scheduling isn’t a “kind of” preemptive scheduling. It’s its own thing.
I mean, like I said, it should be treated as a preemptive VM in practice. In reality it's not, but for most practical cases it's easier to understand in terms of preemption.
> Samsung stopped making drivers for their MFCs so I needed to toss a perfectly working laser MFC because it stopped working with Linux, just like that.
Erm... The old drivers stopped working with this particular device?
I had a very similar case with a perfectly good HP laser printer, which doesn't
work on Windows 10 anymore because... dunno? No drivers from HP, though. I'm
sure it would work just fine with a generic PCL or PostScript driver under CUPS.
> Later, a HP MFC caused endless pain, seemingly every other Arch update broke one of Bluetooth / printer of MFC / scanner of MFC.
Well, that's probably self-inflicted by your choice of Arch, not caused by
Linux itself (e.g. Debian).
> Plain Wifi eventually worked more or less (but see the endless string of bugs with 5GHz) but enterprise wifi always has been a pain.
Enterprise Wi-Fi has always been a pain, also under Windows.
> The strange F5 VPN our company used was not particularly Linux friendly
VPNs are usually that way. Very few companies can write sensibly working
software that would run under Linux.
> -- I could only get it to work by running Firefox as root (yuck!).
You get pretty much the same under Windows, though you don't see it as
clearly.
I don't get why you bash Linux. Windows has the exact same problems.
> Erm... The old drivers stopped working with this particular device?
Samsung simply stopped producing Linux drivers. Indeed, if you look at https://www.bchemnet.com/suldr/index.html there is a gap of more than a few years there. Also, if you look at the supported-models page, https://www.bchemnet.com/suldr/supported.html, some models now have maximum supported driver versions ... and "obviously" you can't avoid updating, because eventually some API change breaks the driver. My printer broke in 2010 with an Ubuntu update.
> Very few companies can write sensibly working software that would run under Linux.
Which is the problem itself. You got it in one.
> You get pretty much the same under Windows, though you don't see it as clearly.
It's possible I do not see it clearly, but the only Bluetooth problem I had was the April Creators update mysteriously switching Chrome to the internal sound card, which was solved in two clicks in Eartrumpet (which was new to me -- finding that software took a little time). I have yet to meet any of the problems listed: every wifi network and VPN I have seen so far has Windows support (and IT is much more prepared to help if there is a problem), the Bluetooth stack didn't actually break, nor have Windows upgrades broken my MFC yet (although I guess I need to wait -- but how long? I have seen people install an HP LaserJet 4 on Win 10 with some struggle). And as I mentioned, my Thunderbolt eGPU just works. Are you saying it would just work on Linux...? Come now.
I love Linux to pieces; I run it on servers and still use the userspace components, but I am writing to warn people: it is still not the year of Linux on the desktop, and probably never will be. Or, if you so prefer, it finally is; it's just Linux on the Windows desktop.
Why? Because you want somebody to be able to censor your whole website
without notifying you and without giving you any meaningful way to protest
(a CA revoking your certificate and giving you as much support as Google gives
its non-paying users or PayPal its sellers).
Now we have the same situation with DNS, but let's add more choke points;
surely it is a good idea.
Vacation is not about having less work. Vacation is about doing something
different than you do usually, on a different schedule and probably with no
big expectations to meet. Helping one's uncle to build a shed can be vacation
to some people. Writing a program one wanted to write for a long time can be
vacation, too. Spending three weeks in one's shop on woodworking can be
a fulfilling vacation as well. So can going on an international tour, or
doing nothing but reading seven different novels one after another.
Getting out of the daily routine, whether by lying next to the pool in an all-inclusive hotel or by being busy all day building something in the sun, changes your perspective and, in my experience, increases your stress resilience. Burnout is often the result of one's inability to put tasks aside for a few days.
My guess is that it's because learning a different paradigm is difficult and
you don't see its benefits until you're proficient with it. It's like learning
functional or logic programming.
Ansible merely gives you a way to execute commands on remote servers. The
much-touted idempotency is not a game changer; it's quite easy to achieve even
if you write everything yourself.
CFEngine requires a different mindset: you need to (a) think of the servers
running independently and (b) configure groups (classes) of servers, not
individual servers (even ones merely grouped into a list of some kind, which
is how Ansible works). Suddenly the environment becomes much easier to manage.
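To make the mindset difference concrete, here is a hypothetical CFEngine 3 sketch (the bundle name, class name and file path are all made up): a promise is guarded by a class, and each host evaluating the policy decides independently whether the class applies to it.

```
bundle agent motd
{
files:

  webservers::                # only acted on by hosts in the "webservers" class
    "/etc/motd"
      create => "true";
}
```

There is no central controller pushing this to an enumerated inventory; every host pulls and evaluates the same policy, and the classes select what each one actually does.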