I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.

This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.

Feedback is very much welcome. Thank you.

  • Australis13@fedia.io · 6 months ago

    Huh? What does how a drive size is measured affect the available address space used at all? Drives are broken up into blocks, and each block is addressable.

    Sorry, I probably wasn’t clear. You’re right that the units don’t affect how the address space is used. My peeve is that because marketing targets nice round numbers, you end up with (for example) a 250GB drive that does not use the full address space available (since you necessarily have to address up to 256GiB). If the units had been consistent from the get-go, then I suspect the average drive would have just a bit more usable space available by default.
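    To put a number on that gap, here is a quick sketch comparing a marketed 250 GB capacity against the 256 GiB binary boundary it sits just below (the 250 GB / 256 GiB pairing is the example from above, not a claim about any particular drive model):

    ```python
    GB = 10**9    # decimal (SI) gigabyte, as used in drive marketing
    GiB = 2**30   # binary gibibyte

    marketed = 250 * GB    # 250,000,000,000 bytes
    boundary = 256 * GiB   # 274,877,906,944 bytes

    gap = boundary - marketed
    print(f"250 GB expressed in GiB: {marketed / GiB:.2f} GiB")
    print(f"shortfall below 256 GiB: {gap / GB:.2f} GB")
    ```

    The shortfall works out to roughly 25 GB, which is the “bit more usable space” being referred to.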

    My comment re wear-levelling was more to suggest that I didn’t think the unused address space (in my example of 250GB vs 256GiB) could be excused by saying it was taken up by spare sectors.

    • nous@programming.dev · 6 months ago

      (for example) a 250GB drive that does not use the full address space available

      Current drives do not have different-sized addressable spaces, and a 256GiB drive does not use the full address space available. If it did, that would be the maximum size a drive could be. Yet we have 20TB+ drives, and even those are nowhere near the address-size limit of storage media.
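      As a rough illustration of how far drives are from that limit (assuming the 48-bit LBA scheme used by modern ATA/SATA drives and 512-byte logical sectors, which is one common addressing setup, not a property of every drive):

      ```python
      # 48-bit logical block addressing with 512-byte sectors
      lba_bits = 48
      sector_bytes = 512

      limit = 2**lba_bits * sector_bytes   # total addressable bytes (2**57)

      drive_20tb = 20 * 10**12             # a 20 TB drive, in decimal bytes
      print(f"addressable limit: {limit / 10**12:,.0f} TB")
      print(f"20 TB drive uses {drive_20tb / limit:.4%} of the address space")
      ```

      Under those assumptions the limit is 2^57 bytes (128 PiB), so even a 20 TB drive occupies only a tiny fraction of the address space.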

      then I suspect the average drive would have just a bit more usable space available by default.

      The platter size might differ to achieve the same density, and the costs would also likely be different, likely resulting in a similar cost per GB, which is the number that generally matters more.

      My comment re wear-levelling was more to suggest that I didn’t think the unused address space (in my example of 250GB vs 256GiB) could be excused by saying it was taken up by spare sectors.

      There is a lot of unused address space - there is no need to come up with an excuse for it. It does not matter what size the drive is; they all use the same number of bits for addressing the data.

      Address space is basically free, so not using all of it does not matter. Putting in extra storage that can use the space does cost, however. So there is no real relation between the address space, the capacity of a drive, and the space accessible to the end user. It makes no difference what units you use to market the drives.

      Instead, the marketing has been incredibly consistent, going all the way back to the early days: physical storage has essentially always been labeled in SI units. There really is no marketing conspiracy here; it is just the way it has always been done. And why was it picked that way to begin with? That was back when binary units were not as common, and physical storage never really fit the power-of-two doubling pattern of components like RAM. You see all sorts of odd sizes in early storage media, so SI units, I guess, did not feel out of place.