Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, expert RoN modder, and creator of RoN:EE’s community patch (CBP).

(header photo by Brian Maffitt)

  • 0 Posts
  • 29 Comments
Joined 1 year ago
Cake day: June 17th, 2023






  • Submitted for good faith discussion: Substack shouldn’t decide what we read. The reason it caught my attention is that it’s co-signed by Edward Snowden and Richard Dawkins, who evidently both have blogs there I never knew about.

    I’m not sure how many of the people who comment on these stories actually read up on them first, but I did, including actually reading the linked Atlantic article. I would personally feel very uncomfortable about voluntarily sharing a space with someone who unironically writes a post called “Vaccines Are Jew Witchcraftery”. However, the Atlantic article also notes:

    Experts on extremist communication, such as Whitney Phillips, the University of Oregon journalism professor, caution that simply banning hate groups from a platform—even if sometimes necessary from a business standpoint—can end up redounding to the extremists’ benefit by making them seem like victims of an overweening censorship regime. “It feeds into this narrative of liberal censorship of conservatives,” Phillips told me, “even if the views in question are really extreme.”

    Structurally this is where a comment would usually have a conclusion to reinforce a position, but I don’t personally know what I support doing here.




  • Typically no – the top two PCIe x16 slots normally connect directly to the CPU, though when both are populated they each drop down to x8 connectivity.

    Any PCIe x4 or x1 slots run off the chipset, as do some of the IO and any third or fourth x16 slots.
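    To make the topology concrete, here’s a rough sketch of how a typical consumer platform splits its CPU lanes – the numbers are illustrative assumptions (loosely AM5-like), not any specific board’s spec:

    ```python
    # Illustrative sketch of a typical consumer platform's CPU PCIe lane split.
    # Exact counts vary by platform; these numbers are assumptions, not a spec.
    cpu_lanes = {
        "gpu_slots": 16,      # the top x16 slot(s); bifurcates to x8/x8
        "nvme": 4,            # a CPU-attached M.2 slot
        "chipset_uplink": 4,  # the link feeding all of the chipset's own IO
    }

    def gpu_slot_widths(both_slots_populated: bool) -> tuple:
        """The 16 GPU-assigned lanes bifurcate when the second top slot is used."""
        return (8, 8) if both_slots_populated else (16,)

    print(gpu_slot_widths(False))  # (16,)
    print(gpu_slot_widths(True))   # (8, 8)
    ```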

    I think the relevant part of my original comment might’ve been misunderstood – I’ll edit to clarify that but I’m already aware that the 16 “GPU-assigned” lanes are coming directly from the CPU (including when doing 2x8, if the board is designed in this way – the GPU-assigned lanes aren’t what I’m getting at here).

    So yes, motherboards typically do implement more IO connectivity than can be used simultaneously, though they will try to avoid disabling USB ports or dropping their speed since regular customers will not understand why.

    This doesn’t really address what I was getting at though. The OP’s point was basically “the reason there isn’t more USB is that there’s not enough bandwidth – here are the numbers”. The bandwidth shortfall they calculated is correct, but in reality we already design boards with more ports than bandwidth – which is why it doesn’t seem like a great answer despite being a helpful addition to the discussion.
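    A quick back-of-envelope illustration of that oversubscription – all of the port counts and speeds below are made-up-but-plausible assumptions, not taken from any real board:

    ```python
    # Illustrative: chipset IO is routinely oversubscribed relative to its uplink.
    # Port counts/speeds below are rough assumptions, not a real board's spec sheet.
    GBPS_PER_LANE = {3: 8, 4: 16, 5: 32}  # approx usable Gb/s per PCIe lane

    uplink_gbps = 4 * GBPS_PER_LANE[4]  # x4 PCIe 4.0 chipset uplink ~= 64 Gb/s

    downstream_ports_gbps = [
        4 * GBPS_PER_LANE[4],  # a chipset-attached x4 NVMe slot
        2 * 20,                # two USB 3.2 Gen 2x2 ports
        4 * 10,                # four USB 3.2 Gen 2 ports
        4 * 6,                 # four SATA III ports
    ]

    total = sum(downstream_ports_gbps)
    # Total downstream capacity far exceeds the uplink, and that's normal.
    print(total, uplink_gbps, total > uplink_gbps)
    ```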


  • Isn’t this glossing over that (when allocating 16 PCIe lanes to a GPU as per your example), most of the remaining I/O connectivity comes from the chipset, not directly from the CPU itself?

    There’ll still be bandwidth limitations, of course, since you can only max out the bandwidth of the link itself (in this case 4x PCIe 4.0 lanes). But it implies that it’s not only okay but normal to implement designs where all available ports can’t simultaneously hit their maximum theoretical bandwidth, so we don’t need to allocate PCIe lanes <-> USB ports as stringently as your example calculations require.

    Note to other readers (I assume OP already knows): PCIe lane bandwidth doubles/halves when going up/down one generation respectively. So 4x PCIe 4.0 lanes are equivalent in maximum bandwidth to 2x PCIe 5.0 lanes, or 8x PCIe 3.0 lanes.
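    That doubling per generation can be written as a one-liner (the per-lane figures are approximate usable rates, anchored at gen 3 ≈ 8 Gb/s per lane):

    ```python
    # Per-lane bandwidth roughly doubles each PCIe generation.
    # Values are approximate usable Gb/s per lane, anchored at gen 3.
    def lane_gbps(gen: int) -> float:
        return 8.0 * 2 ** (gen - 3)  # gen 3 ~= 8 Gb/s per lane

    # Equivalent maximum bandwidth: 4x gen4 == 2x gen5 == 8x gen3
    assert 4 * lane_gbps(4) == 2 * lane_gbps(5) == 8 * lane_gbps(3)
    print(4 * lane_gbps(4))  # 64.0
    ```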

    edit: clarified what I meant about the 16 “GPU-assigned” lanes.



  • Sure, but not much of that battery improvement is coming from migrating the APU’s process node. Moving from TSMC’s 7nm process to their 6nm process is only an incremental improvement: a “half-node” shrink rather than a full-node shrink like going from their 7nm to their 5nm.

    The biggest battery improvement is (almost definitely) from having a 25% larger battery (40Whr -> 50Whr), with the APU and screen changes providing individually-smaller battery life improvements than that. Hence the APU change improving efficiency “a little”.
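    The capacity math alone accounts for most of it – here’s the back-of-envelope version, where the efficiency figure is a made-up illustrative number, not a measured one:

    ```python
    # Back-of-envelope: a 40 Wh -> 50 Wh battery alone gives ~25% more runtime
    # at identical power draw; efficiency gains then multiply on top of that.
    def runtime_multiplier(old_wh: float, new_wh: float, power_ratio: float = 1.0) -> float:
        """power_ratio = new average draw / old average draw (assumed, not measured)."""
        return (new_wh / old_wh) / power_ratio

    print(runtime_multiplier(40, 50))        # 1.25 from capacity alone
    print(runtime_multiplier(40, 50, 0.95))  # ~1.32 with a hypothetical 5% efficiency gain
    ```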


  • They were careful with how they phrased it, leaving the possibility of a refresh without a performance uplift still on the table (as speculated by media). It looks like the OLED model’s core performance will be only marginally better due to faster RAM, but that the APU itself is the same thing with a process node shrink (which improves efficiency a little).


    See also: PCGamer article about an OLED version. They didn’t say “no”, and (just like with the previously linked article), media again speculated about a refresh happening.

    It looks like they were consistent in saying that it wasn’t simple to just drop in a new screen and leave everything else as-is, and they used that opportunity to upgrade basically everything a little bit while they were tinkering with the screen upgrade.





  • MHLoppy@fedia.io to Ask Lemmy@lemmy.world – *Permanently Deleted*

    Yes, though just Nitro Basic. Discord doesn’t show ads and claims to not sell my data. While I can afford to do so, I’d much rather pay a few bucks a month to keep it that way.

    The number of people in this thread aggressively against a free-to-use service having any kind of way to pay employees and server bills makes me fucking depressed, and helps to explain why most free services I enjoy never seem to stay afloat with just an optional payment-based membership thing.

    Edit: To people suggesting less corporate-based (whether FOSS or not) alternatives, that’s totally cool! Just remember that the people behind these projects need some way to pay the bills the same way the corporate ones do, so I encourage you to contribute to them, whether that’s through e.g., code improvements (which doesn’t pay bills but is still helpful!) or plain old donations.


  • Hahaha, I think you’re giving me a bit too much credit - I was just curious enough to run some tests on my own, then share the results when I saw a relevant post about it!

    My interest in image compression is only casual, so I lack both breadth and depth of knowledge. The only “sub-field” where I might qualify as almost an actual expert is in exactly what I posted about – image compression for sharing digital art online. For anything else (compressing photos, compressing for the purpose of storage, etc) I don’t really know enough to give recommendations or the same level of insight!

    Edit: fixed typo and clarified a point.


  • It depends a lot on what’s being encoded, which is also why different people (who’ve actually tested it with some sample images) give slightly different answers. On “average” photos, there’s broad agreement that WebP and MozJpeg are close. Some will say WebP is a little better, some will say they’re even, some will say MozJpeg is still a little better. It seems to mostly come down to the samples tested, what metric is used for performance, etc.

    I (re)compress a lot of digital art, and WebP does really well most of the time there. Its compression artifacts are (subjectively) less perceptible at the level of quality I compress at (fairly high quality settings), and it can typically achieve slightly-moderately better compression than MozJpeg in doing so as well. Based on my results, it seems to come down to being able to optimize for low-complexity areas of the image much more efficiently, such as a flatly/evenly shaded area (which doesn’t happen in a photo).

    One thing WebP really struggles with by comparison is the opposite: grainy or noisy images, which I believe is a big factor in why different sets of images seem to produce different results favoring either WebP or JPEG. Take this (PNG) digital artwork as an extreme example: https://www.pixiv.net/en/artworks/111638638

    This image has had a lot of grain added to it, and so both encoders end up with a much higher file size than typical for digital artwork at this resolution. But if I put a light denoiser on there to reduce the grain, look at how the two encoders scale:

    • MozJpeg (light denoise, Q88, 4:2:0): 394,491 bytes (~10% reduction)
    • WebP (light denoise, Picture preset, Q90): 424,612 bytes (~29% reduction)

    Subjectively I have a preference for the visual tradeoffs on the WebP version of this image. I think the minor loss of details (e.g., in her eyes) is less noticeable than the JPEG version’s worse preservation of the grain and more obvious “JPEG compression” artifacts around the edges of things (e.g., the strand of hair on her cheek).

    And you might say “fair enough it’s the bigger image”, but now let’s take more typical digital art that hasn’t been doused in artificial grain (and was uploaded as a PNG): https://www.pixiv.net/en/artworks/112049434

    Subjectively I once again prefer the tradeoffs made by WebP. Its most obvious downside in this sample is ~~on the small red-tinted particles coming off of the sparkler being less defined~~ [see second edit notes] probably the slightly blockier background gradient, but I find this to be less problematic than e.g., the fuzz around all of the shooting star trails… and all of the aforementioned particles.

    Across dozens of digital art samples I tested on, this paradigm of “WebP outperforms for non-grainy images, but does comparable or worse for grainy images” has held up. So yeah, depends on what you’re trying to compress! I imagine grain/noise and image complexity would scale in a similar way for photos, hence some of (much of?) the variance in people’s results when comparing the two formats with photos.
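    You can get an intuition for why grain inflates file sizes with a lossless codec from the standard library – this is obviously not what WebP or MozJpeg actually do internally, just a rough analogue of “noisy data resists compression, flat data compresses extremely well”:

    ```python
    import random
    import zlib

    # Lossless zlib as a rough analogue only: noisy data resists compression,
    # while flat/low-complexity data compresses to almost nothing. Lossy image
    # codecs face the same pressure, which is one way to see why grain inflates
    # file sizes for both WebP and JPEG.
    random.seed(0)
    flat = bytes([128]) * 100_000                                  # flat "shaded area"
    noisy = bytes(random.randrange(256) for _ in range(100_000))   # heavy "grain"

    flat_size = len(zlib.compress(flat, 9))
    noisy_size = len(zlib.compress(noisy, 9))
    print(flat_size, noisy_size)  # flat shrinks to a tiny fraction; noise barely shrinks
    ```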


    Edit: just to showcase the other end of the spectrum, namely no-grain, low complexity images, here’s a good example that isn’t so undetailed that it might feel contrived (the lines are still using textured [digital] brushes): https://www.pixiv.net/en/artworks/112404351

    I quite strongly prefer the WebP version here, even though the JPEG is 39% larger!

    Edit2: I’ve corrected the example with the sparkler - I wrote the crossed out section from memory from when I did this comparison for my own purposes, but when I was doing that I was also testing MozJpeg without chroma subsampling (4:4:4 - better color detail). With chroma subsampling set to 4:2:0, improved definition of the sparkler particles doesn’t really apply anymore and is certainly no longer the “most obvious” difference to the WebP image!


  • I think in this context (particularly with a very quick skim of the paper for some additional context), it might be more helpful to think of air “powering” this design in the same way that electricity “powers” things. The focus isn’t on the energy source, it’s on the structural design of the “robot” itself.

    Consider it another way: if their system/model/whatever designed a conventional electrically-powered robot without also designing an electrical generator or batteries etc, would you still discount it as “not being a robot”? The problem might be in our expectation based on the language being used. I might also be full of crap haha, but hopefully that’s another perspective to consider.