
Most of the 8-bit BASICs of the time share a common ancestor. Perhaps making every number floating point was a reasonable decision for the hardware that common ancestor was written for, and it just got carried over through the generations.



I think it's more likely that the language had no concept of types, so numbers had to "just work". You can do integer math (slowly) using floating point, but you can't do floating-point math with integers. Especially since the language was targeted at beginners who didn't really understand how their machines worked.

Would have been interesting to see a version of BASIC that encoded numbers as 4-bit BCD strings. Compared to the normal 40-bit floating-point format you would save memory in almost every case, and I bet the math would be just as fast or faster than the floating-point math in most cases as well. The 4-bit BCD alphabet would be the digits 0-9, plus -, ., E, a terminator, and a couple of open codes if you can think of something useful. Maybe an 'o' prefix for octal and a 'b' for binary?
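
A rough sketch of how that packing could work, in C rather than period 6502/Z80 code; the nibble assignments here (0xA for '-', 0xB for '.', 0xC for 'E', 0xF as the terminator) are my own guesses, not anything a shipping BASIC used:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 4-bit alphabet: digits encode as themselves, plus a few
       extra codes for the non-digit characters a BASIC number can contain. */
    enum { NIB_MINUS = 0xA, NIB_POINT = 0xB, NIB_EXP = 0xC, NIB_END = 0xF };

    static int char_to_nibble(char c) {
        if (c >= '0' && c <= '9') return c - '0';
        if (c == '-') return NIB_MINUS;
        if (c == '.') return NIB_POINT;
        if (c == 'E' || c == 'e') return NIB_EXP;
        return -1;   /* not representable */
    }

    /* Pack a numeric string two nibbles per byte; returns bytes written. */
    static size_t pack_bcd(const char *text, uint8_t *out, size_t outlen) {
        size_t n = 0;
        int hi = 1;
        for (; *text; text++) {
            int nib = char_to_nibble(*text);
            if (nib < 0) continue;
            if (hi) { if (n >= outlen) break; out[n] = (uint8_t)(nib << 4); }
            else    { out[n++] |= (uint8_t)nib; }
            hi = !hi;
        }
        if (hi) { if (n < outlen) out[n++] = (NIB_END << 4) | NIB_END; }
        else    { out[n++] |= NIB_END; }   /* terminator fills the spare nibble */
        return n;
    }

    int main(void) {
        const char *samples[] = { "42", "-3.14", "1E6" };
        uint8_t buf[16];
        for (int i = 0; i < 3; i++) {
            size_t n = pack_bcd(samples[i], buf, sizeof buf);
            printf("%-6s -> %zu bytes:", samples[i], n);
            for (size_t j = 0; j < n; j++) printf(" %02X", buf[j]);
            printf("\n");
        }
        return 0;
    }

Small constants like 42 land in two or three bytes against the flat five bytes of a 40-bit float, which is where the memory saving would come from.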


If you look at the ads for microcomputer software, there was a lot of business-related stuff, e.g. AR, AP, etc. - acronyms a kid in public school had no idea about.

If you're writing business software, you'll need to support decimals for currency-related calculations. Tax and interest rates also require decimal values. So floating point helped a lot.

When the 8-bit microcomputers went mainstream (Apple II, Commodore PET, TRS-80), graphics got more popular - sin(), cos(), and other trig functions saw a lot of use, and their return values aren't normally integers.

Sure, most would never write a fast arcade-like game in BASIC, but as a playground for trying out code, the turnaround time was relatively quick.


I don't understand your argument.

Especially when doing financial calculations you do not want to use floating point but fixed point O_o


Explain fixed-point math to a small-to-medium sized business owner.

With signed 16-bit integers (which Apple Integer BASIC provided), you've got a range of 32767 to -32768 (Wikipedia says Apple Integer BASIC couldn't display -32768). But if you do the naive fixed-point approach with 16-bit ints, you'll have a range of 327.67 to -327.68, assuming two decimal digits.

16-bit integers didn't have enough precision for many of those 1970s/1980s use cases.
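
A small illustration of where that naive scheme falls over, in modern C with int16_t standing in for the 16-bit values (the wrap-around is what you'd see on the two's-complement hardware of the era):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Naive fixed point: one int16_t holds an amount in whole cents. */
        int16_t balance = 32767;   /* $327.67, the largest representable amount */
        printf("max balance: $%d.%02d\n", balance / 100, balance % 100);

        /* Add one more cent.  The arithmetic happens in int, but storing the
           result back into int16_t is implementation-defined; on the
           two's-complement machines of the day it simply wraps. */
        balance = (int16_t)(balance + 1);
        printf("one cent later: %d cents\n", balance);   /* -32768 */
        return 0;
    }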

Yes, floating-point math has problems, but they are well-known problems; those corner cases were well understood even back then.


I'd rather explain fixed point math to a small business owner than explain to his accountant why pennies keep randomly disappearing and popping into existence.
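
For the curious, a minimal demonstration of the disappearing penny, using modern C doubles rather than the era's 40-bit format (the cause is the same: 0.10 has no exact binary representation):

    #include <stdio.h>

    int main(void) {
        /* Ring up ten 10-cent items with binary floating point. */
        double total = 0.0;
        for (int i = 0; i < 10; i++)
            total += 0.10;

        printf("total        = %.17f\n", total);                    /* 0.99999999999999989 */
        printf("equals 1.00? = %s\n", total == 1.0 ? "yes" : "no"); /* no */
        return 0;
    }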


You want to try to explain fixed-point math to someone who is discovering the concept of a "variable" for the first time?


It only lets you store whole pennies, not fractions of a penny. That helps stop rounding errors from accumulating and eventually making a calculation wrong. With fixed point, if you put the formula in right, there will be no surprises.
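
A tiny sketch of the integer-cents idea in C (the $19.99 price, 7% rate, and round-half-up rule are just illustrative assumptions):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Everything is stored as whole cents; the only rounding is the
           explicit round-half-up below, exactly where the formula says. */
        int32_t price   = 1999;                          /* $19.99 */
        int32_t tax_pct = 7;                             /* 7% sales tax */
        int32_t tax     = (price * tax_pct + 50) / 100;  /* 140 cents */
        int32_t total   = price + tax;                   /* 2139 cents */

        printf("tax:   $%d.%02d\n", tax / 100, tax % 100);      /* $1.40 */
        printf("total: $%d.%02d\n", total / 100, total % 100);  /* $21.39 */
        return 0;
    }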


That knowledge was less widespread at the time.


...except that using floating-point values to represent currency was never recommended because of precision issues. Using fixed-point or "integer cents" was preferable.


Atari BASIC used BCD for numbers. It is notably not a Microsoft BASIC descendant. Cromemco BASIC is another example.


Mostly that ancestor would be Microsoft BASIC on the Altair.


