
Twitter has restrictions that make perfect sense given its use, but that would be really strange decisions for a protocol, e.g. the 140-character limit.


The 140-character limit is so that Tweets can fit in a 160-character SMS, with space left over for a username. SMS was originally a popular way to use Twitter. (And I think you still can?)


An SMS is actually limited to 140 octets, but the GSM 7-bit encoding that's typically used means you can get 160 characters. If you use characters outside that set, it switches to 16-bit UCS-2, so you are limited to 70 characters.

In reality you can send longer messages and they will be split into multiple messages over the wire. However, in the US, where you had (have?) to pay to receive messages, that meant you would be charged for each individual message.

TL;DR: These limits may have made sense for the MVP, but as soon as most people moved to IP clients they were obsolete.
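
As a rough sketch of that segmentation arithmetic (illustrative only; it treats ASCII as a stand-in for the GSM 7-bit character set, which is a simplification):

    # Sketch of SMS segment counting. Assumes ASCII approximates the GSM-7
    # basic set; the real GSM 03.38 table differs slightly.
    def sms_segments(text):
        gsm7 = all(ord(c) < 128 for c in text)       # crude GSM-7 check
        single, multi = (160, 153) if gsm7 else (70, 67)
        if len(text) <= single:
            return 1
        # Concatenated messages lose a few octets per segment to the UDH
        # header, hence 153/67 characters per segment instead of 160/70.
        return -(-len(text) // multi)                # ceiling division

    print(sms_segments("x" * 160))    # 1 segment (GSM-7, exactly fits)
    print(sms_segments("x" * 161))    # 2 segments
    print(sms_segments("日" * 71))    # 2 segments (chars outside GSM-7 force UCS-2)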


"SMS was originally a popular way to use Twitter."

I use SMS->tweet all the time. There is also a set of commands you can use over SMS to talk to Twitter. [0] Doing so makes more sense (to me), as SMS is a reliable protocol over mobile networks.

[0] https://support.twitter.com/articles/14020


Well, one of the big benefits of SMS Twitter was that you could subscribe with no account. We used this for our systems-notification Twitter account to send out status updates during downtime, or any time we were pretty sure our user base couldn't access our status page through normal means.

Mildly annoying was that, at the time, there wasn't a way to see how many SMS subscribers you had. I don't know if that has changed or not, but it left us constantly wondering how many we had outside of our IRL headcount.


DNS was limited to 512 bytes for a long time. Not necessarily a strange decision at all.


It's curious, but IRC servers also seem to have that limit (for the total message payload size).

I can't seem to track down a /reason/ for this common limit. Systems were a lot smaller back in the day, but 512 is fairly easy to hit and I'd honestly expect something in the range of 1-8 KB to be the actual limit.
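
(For reference, IRC's RFC 1459 does spell out the limit, even if not the reason: a full message line, including the trailing CR-LF, may not exceed 512 characters.) A minimal sketch of what that means for a client sending long text, treating characters as bytes for simplicity:

    # RFC 1459: an IRC message, command plus parameters plus the trailing
    # CR-LF, may not exceed 512 bytes, so long text has to be chopped into
    # several PRIVMSGs. Sketch only; a real client would also budget for the
    # ":nick!user@host " prefix the server prepends when relaying.
    def split_privmsg(target, text):
        overhead = len("PRIVMSG ") + len(target) + len(" :") + len("\r\n")
        budget = 512 - overhead
        chunks = [text[i:i + budget] for i in range(0, len(text), budget)]
        return ["PRIVMSG {} :{}\r\n".format(target, c) for c in chunks]

    for line in split_privmsg("#example", "x" * 1200):
        print(len(line))      # every line stays at or under 512 bytes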


It's in the spec. From RFC 791, "Internet Protocol", page 13 (https://tools.ietf.org/html/rfc791):

    Total Length is the length of the datagram, measured in octets,
    including internet header and data.  This field allows the length of
    a datagram to be up to 65,535 octets.  Such long datagrams are
    impractical for most hosts and networks.  All hosts must be prepared
    to accept datagrams of up to 576 octets (whether they arrive whole
    or in fragments).  It is recommended that hosts only send datagrams
    larger than 576 octets if they have assurance that the destination
    is prepared to accept the larger datagrams.

    The number 576 is selected to allow a reasonable sized data block to
    be transmitted in addition to the required header information.  For
    example, this size allows a data block of 512 octets plus 64 header
    octets to fit in a datagram.  The maximal internet header is 60
    octets, and a typical internet header is 20 octets, allowing a
    margin for headers of higher level protocols.

Note: That was published in 1981, when internetwork speeds were likely around 1 Mbps or lower.
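
To make the arithmetic concrete (a back-of-the-envelope sketch; the 512 + 64 example above is where DNS's classic 512-byte UDP limit is usually traced back to):

    # A 512-byte DNS message plus typical headers fits comfortably inside the
    # 576-octet datagram every IPv4 host must be able to accept.
    MIN_REASSEMBLY = 576      # octets every host must accept (RFC 791)
    IP_HEADER_TYPICAL = 20    # octets, with no IP options
    UDP_HEADER = 8            # octets
    DNS_PAYLOAD = 512         # classic pre-EDNS DNS message limit

    used = IP_HEADER_TYPICAL + UDP_HEADER + DNS_PAYLOAD
    print(used, "<=", MIN_REASSEMBLY, "->", used <= MIN_REASSEMBLY)   # 540 <= 576 -> True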


Well, DNS now supports EDNS which, if specified, allows for packets up to the maximum UDP packet size (though in practice this isn't larger than 4096 bytes). The larger a UDP packet, the more likely it is to be fragmented somewhere along the path, increasing the risk of losing it in transit. To reduce this, staying under the MTU of the network is desirable: 1400-1500 bytes for most people.

Though, because of jerks DDoSing systems with reflection/amplification attacks, some DNS servers now require TCP for any response larger than 512 bytes.
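
A minimal sketch of advertising a larger buffer via EDNS0, using the dnspython library and the 8.8.8.8 resolver purely as illustrative choices:

    # Advertise a 4096-byte EDNS0 buffer so the server may answer over UDP
    # with more than the classic 512 bytes; fall back to TCP if it truncates.
    import dns.flags
    import dns.message
    import dns.query

    query = dns.message.make_query("example.com", "TXT", use_edns=0, payload=4096)
    response = dns.query.udp(query, "8.8.8.8", timeout=2.0)

    if response.flags & dns.flags.TC:     # server truncated the answer; retry over TCP
        response = dns.query.tcp(query, "8.8.8.8", timeout=2.0)

    print(len(response.answer), "answer RRset(s)")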



