I can spiral my tongue so that the front part is fully upside down - but only to the left. I can’t rotate it to the right at all for some reason; it’s like the equivalent muscles are missing.
After Twitter went to shit, where else do customers have to go for customer support like this?
Admittedly, I didn’t read the article, but I have seen plenty of other cases with Cloudflare or other big providers where people have only been able to set things right by kicking up a fuss on social media, like that recent one with Amazon AWS.
What was it? I’m planning to do a nextcloud deployment via helm soon.
sn1per is not open source, according to the OSI’s definition
The license for sn1per can be found here: https://github.com/1N3/Sn1per/blob/master/LICENSE.md
It’s more a EULA than an actual license. It prohibits a lot of stuff, and is basically source-available. For example:
You agree not to create any product or service from any part of the Code from this Project, paid or free
There is also:
Sn1perSecurity LLC reserves the right to change the licensing terms at any time, without advance notice. Sn1perSecurity LLC reserves the right to terminate your license at any time.
So yeah. I decided to test it out anyways… but what I see… is not promising.
FROM docker.io/blackarchlinux/blackarch:latest
# Upgrade system
RUN pacman -Syu --noconfirm
# Install sn1per from official repository
RUN pacman -Sy sn1per --noconfirm
CMD ["sn1per"]
The two pacman commands are redundant; you only need to run pacman -Syu sn1per --noconfirm once. This also goes against docker best practice, as it creates two layers where only one is necessary. On top of that, best practice also includes deleting cache files, which isn’t done here, so the final docker image is probably significantly larger than it needs to be.
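As a sketch of what a cleaner version could look like (untested, and assuming the sn1per package installs from the blackarch repo the way their Dockerfile implies):

FROM docker.io/blackarchlinux/blackarch:latest

# Upgrade the system and install sn1per in a single layer,
# then clear pacman's package cache to keep the image small
RUN pacman -Syu --noconfirm sn1per \
    && pacman -Scc --noconfirm

CMD ["sn1per"]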
Their kali image has similar issues:
RUN set -x \
&& apt -yqq update \
&& apt -yqq full-upgrade \
&& apt clean
RUN apt install --yes metasploit-framework
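A hedged sketch of the same fix applied here (untested): fold the install into the single RUN so it uses the fresh package index, and clean up afterwards:

RUN set -x \
    && apt -yqq update \
    && apt -yqq full-upgrade \
    && apt install -yqq metasploit-framework \
    && apt clean \
    && rm -rf /var/lib/apt/lists/*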
https://www.docker.com/blog/intro-guide-to-dockerfile-best-practices/
It’s still building right now. I might edit this post with more info if it’s worth it. I really just want a command-line vulnerability scanner, and sn1per seems to offer that with greenbone/openvas as a backend.
I could modify the dockerfiles with something better, but I don’t know if I’m legally allowed to do so outside of their repo, and I don’t feel comfortable contributing to a repo that’s not FOSS.
I’m using eternity, which hasn’t received any updates, on my phone, and the default lemmy web interface on my computer.
Maybe I need to try some other options.
This is just straight wrong. iMessage on Android has worked by connecting to a remote Mac, which then connects to iMessage. The protocol is locked to their hardware.
And even if there were a true open source reimplementation of iMessage, that would say nothing about the security of Apple’s proprietary implementation of iMessage’s end-to-end encryption.
Because some of us have fat fingers and accidentally downvote when we scroll on mobile.
One of the things I liked about reddit was that, since it saved downvoted posts, I could go through the list every once in a while and undownvote the accidents.
Can’t do that here though, and I sometimes notice posts or comments I’ve accidentally downvoted.
Anyway, people shouldn’t care so much; we don’t have a karma system or the like here, so why does it matter?
I can’t find the source code for this extension
Have you tried running a vpn to your phone while tethering?
https://moonpiedumplings.github.io/guides/unrestricted-tethering/
I experimented with it a little bit, but it didn’t go anywhere when I discovered my phone already proxies/NATs all my traffic.
I use this too, and it should be noted that it does not require wireguard or any VPN solution: the rathole server can be hosted publicly, allowing a machine behind a NAT or firewall to connect out to it.
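A minimal sketch of that setup, based on rathole’s documented TOML config (the service name, ports, and token here are illustrative):

# server.toml, on the publicly reachable machine
[server]
bind_addr = "0.0.0.0:2333"        # control channel the client dials out to

[server.services.web]
token = "use_a_long_random_token"
bind_addr = "0.0.0.0:8080"        # port exposed to the public

# client.toml, on the machine behind the NAT/firewall
[client]
remote_addr = "server.example.com:2333"

[client.services.web]
token = "use_a_long_random_token"
local_addr = "127.0.0.1:8080"     # local service being exposed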
No, it is lock-in. If Apple allowed app stores other than their own, then users could pay for an app on one app store and then not have to pay again on another, potentially even on non-Apple devices.
I encountered this when I first purchased Minecraft Bedrock Edition on the Amazon Kindle. Rather than repurchasing it on the Google Play store when on a non-Amazon device, I simply tracked down the Amazon app store for non-Amazon devices and redownloaded it from there. No lock-in to Amazon or to other Android devices, either way.
Now, the Apple app store would still probably not work on androids… but now they would actually have to compete for users on the app store, by offering something potentially better than transferable purchases across ecosystems.
I suspect the upcoming Epic store for iOS and android may be like that… pay for a game/app on one OS, get it available for all platforms where you have the Epic store. But the only reason the Epic store is even coming to iOS is because Apple has been forced to open up their ecosystem.
LXD/Incus. It’s truly free/open
Please stop saying this about lxd. You know it isn’t true, ever since they started requiring a CLA.
LXD is literally less free than proxmox, looking at those terms, since Canonical isn’t required to open source any custom lxd versions they host.
Also, I’ve literally brought this up to you before, and you acknowledged it. But you continue to spread this despite the fact that you should know better.
Anyway, Incus currently isn’t packaged in debian bookworm, only trixie.
The version of lxd that Debian packages is from before the license change, so that’s still free. But for people on other distros, it’s better to clarify that Incus is the truly FOSS option.
Edge WebView2
I’m like 90% sure this requires edge to be installed, even though the EU mandated that they make edge uninstallable. So that might be their game here.
Docker’s manipulation of nftables is pretty well defined in their documentation
Documentation people don’t read. People expect that, like most other services, docker binds to ports/addresses behind the firewall. Literally no other container runtime/engine does this, including, notably, podman.
As to the usage of the docker socket, that is widely advised against unless you really know what you’re doing.
Too bad people don’t read that advice. They just deploy the webtop docker compose without understanding what any of it is. I like (hate?) linuxserver’s webtop, because it’s an example of two of the worst footguns in docker in one place.
To include the rest of my comment that I linked to:
Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?
No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker “bypasses” the firewall.
On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that’s better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren’t exposed to the internet, and docker throws that out the window.
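To illustrate with the well-known ufw example (a sketch, not output from any specific system):

# host firewall configured to deny all incoming traffic
sudo ufw default deny incoming
sudo ufw enable

# publish a container port with docker
docker run -d -p 8080:80 nginx

# port 8080 is now reachable from other machines anyway, because docker
# inserts its own iptables rules that are evaluated before ufw's chains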
You originally stated:
I think from the dev’s point of view (not that it is right or wrong), this is intended behavior, simply because if docker didn’t do this, they would get 1,000 issues opened per day from people saying containers don’t work when they forgot to add a firewall rule for a new container.
And I’m trying to say that even if that was true, it would still be better than a footgun where people expose stuff that’s not supposed to be exposed.
But that isn’t the case for podman. A quick look through the github issues for podman doesn’t show it inundated with newbies asking “how to expose services?” because they assumed a firewall port needed to be opened. Instead, there are bug reports in the opposite direction, like this one, where services are being exposed despite the firewall being up.
(I don’t have anything against you, I just really hate the way docker does things.)
Probably not an issue, but you should check. If the port opened is something like 127.0.0.1:portnumber, then it’s only bound to localhost, and only that local machine can access it. If no address is specified, it binds to all interfaces, so anyone who can reach the server can access that service. An easy way to see running containers is docker ps, where you can look at forwarded ports.
Alternatively, you can use the nmap tool to scan your own server for exposed ports; nmap -A serverip does the slowest, but most in-depth, scan.
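To illustrate the difference in a compose file (hypothetical service and ports):

services:
  myapp:
    image: myapp:latest
    ports:
      - "127.0.0.1:8080:80"   # bound to localhost only; unreachable from other machines
      # - "8080:80"           # would bind 0.0.0.0:8080, reachable from any interface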
Yes, it is a security risk, but if you don’t have all ports forwarded, someone would still have to breach your internal network first IIRC, so you would have many, many more problems than docker.
I think from the dev’s point of view (not that it is right or wrong), this is intended behavior, simply because if docker didn’t do this, they would get 1,000 issues opened per day from people saying containers don’t work when they forgot to add a firewall rule for a new container.
My problem with this is that, when running a public-facing server, it ends up with people exposing containers that really, really shouldn’t be exposed.
Excerpt from another comment of mine:
It’s only docker where you have to deal with something like this:
---
services:
  webtop:
    image: lscr.io/linuxserver/webtop:latest
    container_name: webtop
    security_opt:
      - seccomp:unconfined #optional
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SUBFOLDER=/ #optional
      - TITLE=Webtop #optional
    volumes:
      - /path/to/data:/config
      - /var/run/docker.sock:/var/run/docker.sock #optional
    ports:
      - 3000:3000
      - 3001:3001
    restart: unless-stopped
Originally from here, edited for brevity.
Resulting in exposed services. Feel free to look at shodan or zoomeye, search engines for internet-connected devices, for exposed versions of this service. This service is highly dangerous to expose, as it gives people an in to your system via the docker socket, which is effectively root access to the host.
If you need public access:
https://github.com/anderspitman/awesome-tunneling
From this list, I use rathole. One rathole container runs on my vps, and another runs on my home server, and it exposes my reverse proxy (caddy), to the public.
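Roughly, the VPS side looks like this (the image name and config path are assumed from the rathole project; ports are illustrative):

# compose file on the VPS
services:
  rathole:
    image: rapiz1/rathole:latest
    command: --server /app/config.toml
    volumes:
      - ./server.toml:/app/config.toml
    ports:
      - 2333:2333   # control channel my home server connects out to
      - 443:443     # forwarded on to caddy at home
    restart: unless-stopped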
Provision Management Software: Openstack skyline/horizon
Compute: Openstack nova
And so on. Openstack is also many, many components that can be pieced together for your own cloud computing platform.
Although it won’t have the sheer number of services AWS has, many of them are redundant.
The core services I expect to see done first: compute, networking, storage (+ image storage), and a web UI/API
Next: S3 storage, Kubernetes as a service, and then either Databases as a service or containers as a service.
But you are right: many of the services that AWS offers are highly specialized (robotics, space communication) and lock people in, and I don’t really expect to see those.
AWS is software. Just not something you can self host.
There already exist alternatives to AWS, like localstack, a local AWS for testing purposes, or the more mature openstack, which is designed for essentially running your own AWS at scale.
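For example, localstack can be spun up in a container and targeted with the stock aws CLI (the port and image come from localstack’s docs; the bucket name is illustrative):

# start localstack's all-in-one container
docker run -d -p 4566:4566 localstack/localstack

# point the aws CLI at the local endpoint instead of real AWS
aws --endpoint-url=http://localhost:4566 s3 mb s3://test-bucket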
I just use termux + the simple http server built into python
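For anyone curious, that server is a one-liner (the port is arbitrary):

# serve the current directory over HTTP on port 8080
python -m http.server 8080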