It's important to remember, for each data byte in the buffer, to also store the LSR along with that data. The Windows serial port driver fails to do this, meaning you can tell there was (for example) a parity error on a byte in your FIFO, you just don't know on which byte.
How would you implement that though, assuming memory is tight? The simplest would be to just store a multibyte entry for each item in the ring buffer: the actual data, and the status flags set at the time it was copied. But that doubles your memory requirements, and you'll (probably) not need that status info all the time.
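A minimal sketch of that first approach (the names and the ISR signature are made up; the LSR bit layout is the standard 16550 one):

    #include <stdint.h>

    #define RX_BUF_SIZE 64            /* power of two keeps the index math cheap */

    struct rx_entry {
        uint8_t data;                 /* the received byte */
        uint8_t lsr;                  /* LSR snapshot: overrun/parity/framing/break bits */
    };

    static struct rx_entry rx_buf[RX_BUF_SIZE];
    static volatile uint8_t rx_head, rx_tail;   /* tail is advanced by the consumer */

    /* called from the UART RX interrupt with the byte and the LSR read for it */
    void uart_rx_isr(uint8_t data, uint8_t lsr)
    {
        uint8_t next = (rx_head + 1) & (RX_BUF_SIZE - 1);
        if (next != rx_tail) {        /* drop the byte if the buffer is full */
            rx_buf[rx_head].data = data;
            rx_buf[rx_head].lsr  = lsr;
            rx_head = next;
        }
    }

Two bytes per entry instead of one, which is exactly the doubling mentioned above.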
I suppose you could define 'normal' and keep a smaller structure which holds only abnormal values, but then you need a way to map which input byte each value refers to, and make sure it's invalidated when the ring buffer rolls around.
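Roughly, that "abnormal values only" variant could look like this (again just a sketch with made-up names; LSR_ERROR_BITS covers the 16550 overrun/parity/framing/break flags, and the wrap-around invalidation is exactly the bookkeeping that makes it annoying):

    #include <stdint.h>

    #define ERR_LOG_SIZE   8
    #define LSR_ERROR_BITS 0x1E       /* overrun, parity, framing, break (16550 LSR) */

    struct rx_error {
        uint16_t index;               /* free-running count of the offending byte */
        uint8_t  lsr;                 /* the abnormal status flags for it */
    };

    static struct rx_error err_log[ERR_LOG_SIZE];
    static uint8_t  err_count;
    static uint16_t rx_byte_counter;  /* incremented for every received byte */

    /* called from the RX interrupt; only records something when the LSR is abnormal */
    void record_if_error(uint8_t lsr)
    {
        if ((lsr & LSR_ERROR_BITS) && err_count < ERR_LOG_SIZE) {
            err_log[err_count].index = rx_byte_counter;
            err_log[err_count].lsr   = lsr;
            err_count++;
        }
        rx_byte_counter++;
        /* entries whose index has fallen out of the ring buffer must be
           discarded when it wraps, otherwise they point at the wrong byte */
    }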
Or you could put some of the error-handling logic into your buffering code (but it'd be in an interrupt handler, which you probably don't want), and you might need your higher-level framing or packetisation to know how far back it needs to invalidate.
I'm not sure there is a single optimal solution (although if you know of one, I'd love to know). Windows getting it wrong is interesting/odd though, unless it just dates to a time when they were CPU/memory restricted, and it's just stuck around for compatibility.
I assume that your serial port will run some kind of machine-to-machine protocol, and not just a serial console with user data (for which, frankly, I'd just ignore most errors). These serial protocols (like Modbus, or the 10000s of ad-hoc specified "proprietary" protocols many devices speak) are usually very simple...
And if memory is tight, you'll probably also want to move some part of your protocol handling down the stack, possibly all the way into the interrupt handler, because otherwise you'll needlessly have double buffers (one for characters, one for decoded protocol datagrams). For most stuff running on serial ports the code will be simple enough not to create too many headaches. Then just store the fact that you've encountered a parity error next to your datagram (msg->flags |= MSGFLAG_PARITYERR;).
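As a rough illustration (the names are made up, the end-of-frame handling is elided, and the LSR bit positions are the 16550 ones):

    #include <stdint.h>

    #define LSR_PARITY_ERR     (1u << 2)   /* 16550 LSR: parity error */
    #define LSR_FRAMING_ERR    (1u << 3)   /* 16550 LSR: framing error */

    #define MSGFLAG_PARITYERR  (1u << 0)
    #define MSGFLAG_FRAMINGERR (1u << 1)
    #define MSG_MAX_LEN        64

    struct msg {
        uint8_t flags;                /* error summary for the whole datagram */
        uint8_t len;
        uint8_t payload[MSG_MAX_LEN];
    };

    static struct msg rx_msg;

    /* RX interrupt decodes straight into the datagram: no separate byte buffer */
    void uart_rx_isr(uint8_t data, uint8_t lsr)
    {
        if (lsr & LSR_PARITY_ERR)
            rx_msg.flags |= MSGFLAG_PARITYERR;
        if (lsr & LSR_FRAMING_ERR)
            rx_msg.flags |= MSGFLAG_FRAMINGERR;

        if (rx_msg.len < MSG_MAX_LEN)
            rx_msg.payload[rx_msg.len++] = data;

        /* end-of-frame detection (silent interval, length field, terminator, ...)
           would hand rx_msg to the protocol layer and reset it here */
    }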
And yes, I know that this is a "blatant layering violation" for a proper operating system, but on a microcontroller really no one cares that you have two variations of an interrupt handler, specific to your protocol handling.
EDIT/ADDED: It's not only parity errors: there are also things like RS485 links with 9-bit addressing, BREAK characters with semantic meaning, and communication protocols that specify when to switch direction on a half-duplex link, with hard timing requirements.
Thanks for this. I was happy to see 'Patterns for Time-Triggered Embedded Systems' on your list, though the link appears broken - new link: [1]. I remember working through that book and porting everything from C to Assembly (my boss at the time was too cheap to buy me the nice Keil compiler). Once I had that library in hand, I was knocking out his projects in days instead of weeks.
That looks interesting, since one website said that lack of Tk/Tcl was the obstacle.
I'm not desperate, for now. My interest was piqued when I got interested in Python and Android almost simultaneously, was delighted to see that Android 3 would support Python, and bought an Android 3 tablet.
All of this is just in the name of recreational programming. My weird dream is to be able to write and run my code on any computer. But an unanticipated side issue is that coding on a tiny screen is no fun at all at my advanced age.
I wear one. I'd never heard the term "wreath beard" before reading this article. Terms I've seen for it are: tauferbard[1], chin curtain, Donegal, Lincoln[2], Alaskan whaler[3], Shenandoah, and spade beard[4].
A repeating field is a field that contains multiple values of the defined type - e.g. the column is of type integer, but the field contains multiple integer values.
Mainstream relational databases do not support repeating fields, so any table will be in 1NF by default. Some non-relational databases do support repeating fields.
You can simulate a repeating field in a relational database by e.g. having the column be of type string and then putting multiple comma-separated values in the field. But strictly speaking this is not a repeating field, since the column type is string, and there is only one string. It is still bad design though!
The renaissance fair in Muskogee always has Bud's Homemade Root Beer in from Alton, IL. The stuff is amazing and unlike any commercial root beer I've tried. http://www.budsrootbeer.com/
I'm in casino gaming. We have to send our source and tools to regulatory test labs so they can (hopefully) independently generate the same binary as what we are delivering. Given our tools (C++ and Windows), 'binary reproducibility'[1] is impossible, but we've got a workaround. We do our release builds on a VirtualBox that's all tooled up. When it comes time to deliver to the lab, we export the entire box (with source already synced) as an .ova. Part of our build pipeline is a tool that strips things like timestamps and paths from the PE files. Some people don't go to all this trouble and instead use tools like Zynamics BinDiff to explain away the diffs.
What are the companies that provide this service (reproducing builds)? I haven't heard of this, but sounds interesting.
Depending on how much effort you're willing to put in, even if you use C++ and Windows, you can still write a program to parse the executable and zero out timestamps and other non-deterministic data. That is actually being done in a Bitcoin-related program for Windows, I believe.
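For what it's worth, the COFF-header part of that is only a few lines; a rough sketch in C (no validation of the file, assumes a little-endian host, and ignores the other timestamped bits like the debug directory and export table that a real tool also has to patch):

    #include <stdio.h>
    #include <stdint.h>

    /* zero the TimeDateStamp in a PE file's COFF header, in place */
    int zero_pe_timestamp(const char *path)
    {
        FILE *f = fopen(path, "r+b");
        if (!f)
            return -1;

        /* offset 0x3C of the DOS header holds the offset of the "PE\0\0" signature */
        uint32_t e_lfanew = 0;
        if (fseek(f, 0x3C, SEEK_SET) != 0 || fread(&e_lfanew, 4, 1, f) != 1) {
            fclose(f);
            return -1;
        }

        /* TimeDateStamp sits 8 bytes past the signature:
           "PE\0\0" (4) + Machine (2) + NumberOfSections (2) */
        uint32_t zero = 0;
        if (fseek(f, e_lfanew + 8, SEEK_SET) != 0 || fwrite(&zero, 4, 1, f) != 1) {
            fclose(f);
            return -1;
        }

        fclose(f);
        return 0;
    }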
How do you generate and verify the VirtualBox image? If you send the image over to the test lab, then the obvious attack is for someone to tamper with your VirtualBox image, and you have the same problem all over again, just at a different level.
For jurisdictions that don't have their own state-run labs (so not NV, NJ, PA, etc.), everybody uses one or a mix of GLI[1], BMM[2], and Eclipse[3]. Note: I'm only familiar with US gaming.
We do have a tool to zero these parts of the executable files out, but in our testing we still had unexplainable differences unless we were on the same machine working from the same sync.
The VirtualBox was generated once (installed Windows, Visual Studio, .NET, some others) and we just continue to use the same base .ova.
The package has to be sent to the lab on physical media where it gets loaded onto an offline machine that we've supplied.
This works for your goal (being able to reproduce the binary build), but in Mozilla's case it's slightly different.
Since Mozilla's software is FLOSS, the goal is that end users can completely reproduce the builds from source. This includes dependencies, toolchains, AND the build environment. In this scenario, accepting a pre-built binary VM would not be acceptable, since it defeats the spirit of FLOSS.
I used to work in the same industry. We used Linux and GCC, so we could, and did, produce fully deterministic builds. Actually, the output was fully deterministic disk images.
I did one iteration of the build system, mostly making it such that any host could build it deterministically. This was years ago, so it was just a chroot that started with a skeleton + GCC and procedurally built the things it needed to build the outputs. It was fairly straightforward: just an extremely short patch here and there, and a 1000-line Xorg Makefile for staging Xorg builds. If I were doing it again I'd consider reusing a package manager, but each component's Makefile was pretty concise. My trusty sidekick was a script that xxd'd two files into pipes that it opened using vimdiff.
So the regulators have to use the provided virtual machine and tools to build the source, and verify that the resulting binary is the same as provided by your company?
How do they confirm that the toolchain has not been messed with? Surely they can't binary-check the whole OS/compiler/linker/other software in the VM?