• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • I think it’s more that they’re trying to make a Twitter-killer than kill Mastodon from the inside.

    This is the answer. They aren’t stupid; they know that if they just spin up a Twitter clone, nobody will use it. They need a reason to exist. Honestly I don’t think they give a single shit about Mastodon or killing it. But what ActivityPub does is get them an instant content base. And if they are building their own AI, it’s a whole lot of live conversation for them to train it on.



  • Yes exactly! I’ve always been on Reddit for the comment sections. I miss 12-13 years ago, back when forums and IRC were what there was (aka, DISCUSSION-based communities where low-effort posts didn’t blow up, and platforms were run by hobbyists and webmasters catering to communities rather than social media corps desperate for clicks). It was a better time to be online IMHO. I think part of it was that joining web communities was slightly unapproachable, which meant you had to be at least a little smart to realize that you wanted to join that community and figure out how to join it. But I think the format of the sites had a bigger effect in selecting the quality of content that got popular.
    As you say, it’s a nonstop torrent of visual candy you can scroll through, click click click, racking up another ad impression or 12 each time.



  • I think Reddit quality has been declining for some time.

    There are two factors at work, I believe. One, once something goes mainstream, you get a much broader set of the population on the platform, and much like real life, the idiots seem to be louder. More importantly though, updates to the platform deprioritized serious conversation in favor of mindless scrolling. Look at the new website, or at the official app. They are not conducive to in-depth conversation. They keep trying to distract you with posts from other communities that you don’t even subscribe to; the goal is obviously to get you to keep clicking, clicking, clicking rather than spending a bunch of time on one page composing a well thought out reply.

    And that shows. Really high quality in-depth conversations on issues of importance used to be far more common for me on Reddit. Today they are much less frequent; fewer people seem interested in real discussion or debate. And there’s much more of the attitude of ‘you disagree with me, therefore you’re wrong, fuck you’.

    I think the recent protests and the beginning of the migration are going to make that even more prevalent. I think many of the smarter people who enjoy in-depth discussion and post quality comments are going to migrate to Lemmy or Kbin, leaving Reddit full of idiots. I think that will actually be good for Reddit as a company, at least in the short term, because idiots don’t use ad blockers and they install the official app without thinking. It is of course killing their golden goose, but their actions suggest they have decided they prefer to do without that goose’s continued services.



  • While it has its benefits, is it suitable for vehicles, particularly their safety systems? It isn’t clear to me, as it is a double-edged sword.

    Perhaps, but if you are developing a tech that can save lives, doesn’t it make sense to put that out in more cars faster?

    I would be angry that such a modern car with any form of self driving doesn’t have emergency braking. Though, that would require additional sensors…

    Tesla does this with cameras whether you pay for FSD or not. It can also detect if you’re near an object and slam on the gas instead of the brake, and it will cancel that out. These are options you can turn off if you don’t want them.

    I’d also be angry that L2 systems were allowed in that environment in the first place, but as you say it is ultimately the driver’s fault.

    I’m saying- imagine if the car has L2 self driving, and the driver had that feature turned off. The human was driving the car. The human didn’t react quickly enough to prevent hitting your loved one, but the computer would have.
    Most of the conversation around FSD type tech revolves around what happens when it does something wrong that the human would have done right. But as the tech improves, we will get to the point where the tech makes fewer mistakes than the human. And then this conversation reverses- rather than ‘why did the human let the machine do something bad’ it becomes ‘why did the machine let the human do something bad’.

    I would hope that the manufacturer would make it difficult to use L2 outside of motorway driving.

    Why? Tesla’s FSD beta L2 is great. It’s not perfect, but it does a very good job for most parts of driving on surface streets.

    I would prefer they had no self driving rather than be under the mistaken impression the car could drive for them in the current configuration. The limitations of self driving (in any car) are often not clear to a lot of people and can vary greatly.

    This is valid. I think the name ‘full self driving’ is somewhat problematic. I think it will get to the point of actually being fully self driving, and I think it will get there soon (next year or two). But they’ve been using that term for several years now, and especially the first few versions of ‘FSD’ were anything but. And before they started with driver monitoring, there were a bunch of people who bought ‘FSD’ and trusted it a lot more than they should have.

    If Tesla offers a halfway option for less money, would you not expect the consumer to take the cheapest option? If they have an accident it is more likely someone else is injured, so why pay more to improve the self driving when it doesn’t affect them?

    That’s not how their pricing works. The safety features are always there. The hardware is always there. It’s just a function of what software you get. And if you don’t buy FSD when you buy the car, you can buy it later and it will be unlocked over the air.
    What you get is extra functionality. There is no ‘my car ran over a little kid on a bike because I didn’t pay for the extra safety package’. It’s ‘my car won’t drive itself because I didn’t pay for that, I just get a smart cruise control’.

    Tesla is the only company I know steadfastly refusing to use any other sensor types and the only reason I see is price.

    Price yes, and the difficulty of integrating different data sets. On their higher end cars they’ve re-introduced a high resolution radar unit. Haven’t seen much on how that’s being used though.
    The basic answer is that they think they can get where they need to be with cameras alone because their software is better than everyone else’s. For any other automaker that doesn’t have Tesla’s AI systems, LiDAR is important.

    Another concern is that any Tesla incidents, however rare, could do huge damage to people’s perception of self driving.

    This already happens whether the computer is driving or not. Lots of people don’t understand Teslas and think that if you buy one it’ll drive you into a brick wall and then catch on fire while you’re locked inside. Bad journalists will always put out bad journalism. That’s not a reason to stop tech progress tho.

    If Tesla is much cheaper than LIDAR-equipped vehicles will this kill a better/safer product a-la betamax?

    Right now FSD isn’t a main selling point for most drivers. I’d argue that what might kill others is not that Tesla’s system is cheaper, but that it works better and more of the time. Ford and GM both have a self driving system, but it only works on certain highways that have been mapped with centimeter-level LiDAR ahead of time. Tesla has a system they’re trying to make general purpose, so it can drive on any road. So if the Tesla system takes you driveway-to-driveway and the competition takes you onramp-to-offramp, the Tesla system is more flexible and thus more valuable regardless of the purchase price.

    Do you pick your airline based on the plane they fly and its safety record, or the price of the ticket, being confident all aviation is held to rigorous safety standards? As has been seen recently with a certain submarine, safety measures should not be taken lightly.

    I agree standards should apply; that’s why Tesla isn’t L3+ certified, even though on the highway I really think it’s ready for it.


  • Not sure the exact details- I heard they were sampling 10 bits per pixel but a bunch of their release notes talked about photon count detection back when they switched to that system.
    Given that the HW3 cameras started out being used to just generate RGB images, I suspect the current iteration works by just pulling RAW format frames and interpreting them as a photon count grid, from there detecting edges and geometry with the occupancy network.

    I’ve not seen much of anything published by Tesla on the subject. I suspect they’re keeping most of their research hush hush to get a leg up on the competition. They share everything regarding EV tech because they want to push the industry in that direction, but I think they see FSD as their secret sauce- they might sell hardware kits, but they won’t let others too far under the hood.


  • In our town we had a Tesla shoot through red traffic lights near our local school barely missing a child crossing the road. The driver was looking at their lap (presumably their phone). I looked online and apparently autopilot doesn’t work with traffic lights, but FSD does?

    There are a few versions of this and several generations with different capabilities. The early Tesla Autopilot had no recognition of stop signs; it was literally just ‘cruise control that keeps you in your lane’. FSD for sure does recognize stop signs, traffic lights, etc and reacts correctly to them. I BELIEVE that the current iteration of Traffic Aware Cruise Control (what you get if you don’t pay extra for FSD or Enhanced Autopilot) will stop for traffic lights, but I could be wrong on that. I know it detects pedestrians, but its detection isn’t nearly as advanced as FSD’s.

    I will give you that in theory, the time-of-flight data from a LiDAR pulse will give you a more reliable point cloud than anything you’d get from cameras. But I also know Tesla is doing things with cameras that border on black magic. They gave up on getting images out of the cameras and are now just using the raw photon count data from the sensor, and with the AI trained it can apparently detect edges with only a few photons of difference between pixels (below the noise floor). And I can say from experience that a few times I’ve been in blackout rainstorms where even with full wipers I can barely see anything, and the FSD visualization doesn’t skip a beat and it sees other cars before I do.

    Would you still feel the same about Tesla if your car injured/killed someone or if someone you care about was injured/killed by a Tesla?

    As a Level 2 system, the Tesla is not capable of injuring or killing someone. The driver is responsible for that.

    But I’d ask- if a Tesla saw YOUR loved one in the road, and it would have reacted but it wasn’t in FSD mode and the human driver reacted too slowly, how would you feel about that? I say this not to be contrarian, but because we really are approaching the point where the car has better situational awareness than the human.

    If we can put extra sensors in and it objectively makes it safer why don’t we? Self driving cars are a luxury.

    For the reason above with the loved one. If you can use cameras and make a system that costs the manufacturer $3000/car, and it’s 50 times safer than a human, or use LiDAR and cost the manufacturer $10,000/car, and it’s 100 times safer than a human, which is safer?
    The answer is the cameras, because they will be on more cars, and thus deliver more overall safety.
    I understand the thinking that ‘Elon cheaped out, Tesla FSD is a hack system on shitty hardware that uses clever programming to work around a cut-rate sensor suite’. But I’d also argue- if they can get similar performance out of a camera, and put it on more cars, doesn’t that do more to overall improve safety?
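    To make that trade-off concrete, here’s the back-of-envelope version of the argument in Python. Only the prices and safety multipliers come from the hypothetical above; the fleet size and adoption rates are made-up numbers purely for illustration.

    ```python
    # Illustrative back-of-envelope math for the camera-vs-LiDAR argument above.
    # All numbers are assumptions, not real data.

    baseline_crashes_per_car = 1.0   # normalized baseline crash risk per car

    def crashes_averted(adoption_rate, safety_multiplier, fleet_size=1_000_000):
        """Crashes averted across the fleet if `adoption_rate` of cars get a
        system that is `safety_multiplier`x safer than a human driver."""
        equipped = fleet_size * adoption_rate
        averted_per_car = baseline_crashes_per_car * (1 - 1 / safety_multiplier)
        return equipped * averted_per_car

    # Hypothetical: the cheap $3k camera system reaches 30% of cars at 50x safety;
    # the pricey $10k LiDAR system reaches 10% of cars at 100x safety.
    cameras = crashes_averted(0.30, 50)   # ~294,000 crashes averted
    lidar   = crashes_averted(0.10, 100)  # ~99,000 crashes averted
    print(cameras > lidar)  # True: wider deployment wins under these assumptions
    ```

    Obviously you can pick adoption numbers that flip the result; the point is just that fleet-wide safety is deployment times per-car improvement, not per-car improvement alone.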

    In the example above, if the car didn’t have the self driving package because the guy couldn’t afford it, wouldn’t you prefer that a decent, better-than-human self driving system was on the car?


  • Don’t have the paper, my info comes mainly from various interviews with people involved in the thing. Elon of course, Andrej Karpathy is the other (he was in charge of their AI program for some time).

    They apparently used to use feature detection and object recognition in RGB images, then gave up on that (as generating coherent RGB images just adds latency and object recognition was too inflexible) and they’re now just going by raw photon count data from the sensor fed directly into the neural nets that generate the 3d model. Once trained this apparently can do some insane stuff like pull edge data out from below the noise floor.
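    For anyone skeptical that you can recover detail from below the noise floor: the underlying statistics are standard signal averaging, which a quick numpy toy can demonstrate. This illustrates the principle only, not Tesla’s actual pipeline.

    ```python
    # Toy demo of the "edge below the noise floor" idea: a faint step edge
    # (amplitude 0.5) buried in noise (sigma 5.0) is invisible in any one
    # frame, but averaging many frames shrinks the noise by ~1/sqrt(N) and
    # makes the edge detectable.
    import numpy as np

    rng = np.random.default_rng(0)
    edge = np.where(np.arange(100) < 50, 0.0, 0.5)          # faint step edge
    frames = edge + rng.normal(0, 5.0, size=(10_000, 100))  # noisy observations

    single = frames[0]
    averaged = frames.mean(axis=0)

    def edge_contrast(signal):
        """Difference in mean level across the step at pixel 50."""
        return signal[50:].mean() - signal[:50].mean()

    print(edge_contrast(single))    # swamped by noise
    print(edge_contrast(averaged))  # ~0.5, edge recovered
    ```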

    This may be of interest– This is also from 2 years ago, before Tesla switched to occupancy networks everywhere. I’d say that’s a pretty good equivalent of a LiDAR scan…


  • My point stands- drive the car.
    You’re 100% right with everything you say. It has to work 100% of the time. Good enough most of the time won’t get to L3-5 self driving.

    Camera-only is not authorized in most logistics operations in factories; I’m not sure what changes for a car.

    The question is not the camera, it’s what you do with the data that comes off the camera.
    The first few versions of camera-based autopilot sucked. They were notably inferior to their radar-based equivalents- that’s because the cameras were using neural network based image recognition on each camera. So it’d take a picture from one camera, say ‘that looks like a car and it looks like it’s about 20’ away’ and repeat this for each frame from each camera. That sorta worked okay most of the time but it got confused a lot. It would also ignore any image it couldn’t classify, which of course was no good because lots of ‘odd’ things can threaten the car. This setup would never get to L3 quality or reliability. It did tons of stupid shit all the time.

    What they do now is called occupancy networks. That is, video from ALL cameras is fed into one neural network that understands the geometry of the car and where the cameras are. Using multiple frames of video from multiple cameras at once, it then generates a 3d model of the world around the car and identifies objects in it like what is road and what is curb and sidewalk and other vehicles and pedestrians (and where they are moving and likely to move to), and that data is fed to a planner AI that decides things like where the car should accelerate/brake/turn.
    Because the occupancy network is generating a 3d model, you get data that’s equivalent to LiDAR (3d model of space) but with much less cost and complexity. And because you only have one set of sensors, you don’t have to do sensor fusion to resolve discrepancies between different sensors.
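    To make “occupancy” concrete, here’s a toy numpy sketch of the output format: points seen by different cameras are transformed into one shared vehicle frame and binned into a coarse 3D voxel grid. The camera offsets and grid dimensions are made up, and the geometry is hand-coded where the real system uses a neural network.

    ```python
    # Conceptual sketch of an occupancy grid: observations from multiple
    # cameras (each in its own frame) land in one voxel grid in the vehicle
    # frame. In the real system a network produces this grid directly from
    # video; here the fusion is hand-coded purely to show the data structure.
    import numpy as np

    VOXEL = 0.5                                 # voxel edge length, meters (assumed)
    grid = np.zeros((40, 40, 8), dtype=bool)    # 20m x 20m x 4m around the car
    origin = np.array([-10.0, -10.0, 0.0])      # grid corner in vehicle frame

    def mark_occupied(points_vehicle_frame):
        """Bin 3D points (N, 3) in the vehicle frame into the voxel grid."""
        idx = np.floor((points_vehicle_frame - origin) / VOXEL).astype(int)
        ok = ((idx >= 0) & (idx < grid.shape)).all(axis=1)   # drop out-of-grid points
        grid[tuple(idx[ok].T)] = True

    # Two cameras see the same pedestrian ~5m ahead; each reports the point in
    # its own frame, offset by the camera's mounting position (made-up extrinsics).
    front_cam_offset = np.array([2.0, 0.0, 1.2])
    left_cam_offset  = np.array([1.0, 0.8, 1.0])
    ped_front = np.array([[3.0, 0.0, 0.4]])   # point in front camera frame
    ped_left  = np.array([[4.0, -0.8, 0.6]])  # same point, left camera frame

    mark_occupied(ped_front + front_cam_offset)
    mark_occupied(ped_left + left_cam_offset)

    print(grid.sum())  # 1: both views land in the same voxel, at (5, 0, 1.6)
    ```

    Once everything lives in one grid like this, downstream planning doesn’t care which camera saw what, which is the “no sensor fusion headaches” point above.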

    I drive a Tesla. And I’m telling you from experience- it DOES work. The latest betas of the full self driving software are very very good. On the highway, the computer is a better driver than me in most situations. And on local roads- it navigates them near-perfectly; the only thing it sometimes has trouble with is figuring out when it’s its turn at an intersection (you have to push the gas pedal to force it to go).

    I’d say it’s easily at L3+ state for highway driving. Not there yet for local roads. But it gets better with every release.


  • I’m not sure what kind of serious trouble they are actually in. I have spent most of today being driven around by my Tesla, and aside from the occasional badly handled intersection and unnecessary slowdown it’s doing fucking great. So I would tell anyone who says Tesla is in serious trouble: just go drive the car. Actually use the FSD beta before you say that it’s useless. Because it’s not. It is already far better than anyone expected vision-only driving to be, and every release brings more improvements. I’m not saying that as a Tesla fanboy. I’m saying that as a person who actually drives the car.


  • Agree that rationality is not a safe assumption. None of this has been rational- it feels like Spez is having a temper tantrum (as would a small child) and those around him are desperately trying to channel it into professional-ish actions.
    Also makes sense if Spez is Ellen Pao 2.0- board decides unpopular changes need to be made, so they pay Spez extra to do 120% of what they want and be the fall guy. He goes nuts for a while, then resigns, and is replaced with some suit who looks good on TV and has a bit of social media cred. That guy then says all the right things to the community and walks back 20% of the changes.
    This probably all pushes the IPO back a year or so, but if they think they can increase revenue in that time, it makes some sense.

    At this point though I wouldn’t put anything past Reddit.
    I have to think someone there is smart enough to know if they block fediverse links that’s a huge escalation that makes them OBVIOUSLY the ‘bad guys’ even in the eyes of people who DGAF about the API nonsense.
    From the POV of a 3rd party observer, it COULD be argued that Reddit is just dumping freeloaders, a bunch of the users don’t like it and want shit for free, and it’s a stupid forum drama squabble.
    But as soon as they start actively suppressing competitors, that becomes a lot harder to see as anything other than ‘actively stopping their users who want to leave from leaving’.