Nobody likes ads, obviously. But what came as a painful surprise for me is that they don't just annoy people who see them – but also those who try to monetise their projects. It's not just a matter of putting a few tags on your page and waiting for a big fat wire transfer.
The process is exhausting and I feel the need to rant about it, as well as maybe warn people who might be embarking on a similar journey… so here we go.
In a perfect world, if I create a popular website, I could easily find someone who wants to use that advertising space, they give me money, I add their banner, everyone is happy. The ad is simply relevant to the page content, not the personal info about any particular user. In practice, though, unless you're a huge company with enough resources, finding advertisers and negotiating deals is a tedious task that a random programmer or creator just isn't equipped to do.
That's why ad networks exist – they facilitate the process, act as proxies between publishers and advertisers, and they take their cut. Cool concept, it really is. The problem, though, is the capitalist approach of squeezing the last bits of value from the process, at the cost of user experience, privacy and simplicity.
When I was researching ad networks last year, I only found one that was plain and simple: EthicalAds. It would've been perfect for my needs, if it wasn't for the fact that it's only applicable to websites that target software developers. I run a few of those, but they're too tiny to join the network, while the popular project that the rest of the collective and I wanted to use it for, Pronouns.Page, targets a totally different audience.
So it was more a choice of the lesser evil than finding an ad network that we'd be totally comfortable with. Personally, I wanted to avoid dealing with Google, but it looks like most networks rely heavily on AdSense anyway. We've tried three different networks so far, and today I sent the third one a termination notice, after having been in talks with a fourth network about switching to them. Let's hope that the fourth time will be the charm 😅
I don't really wanna name and shame any specific companies, cause it seems like the issues are there all over the industry, plus some of our problems might be stemming from our own inexperience in the advertisement world (although that shows how inaccessible the process is to an average publisher).
One inherent problem with ad networks is that it's practically impossible to fully test how they will look on your website. I can put seven empty placeholders on the page, as instructed by an expert from the network, and then test it and see that three of them got filled out with ads – while you load the same page and see all seven. I might be shown ads of food delivery companies – while you get shown some disgusting foot fungus medication ad. Some networks let you pick the categories that will and will not be shown, and even review post-factum which specific ads have been shown to users and block those advertisers for the future – but others give you zero flexibility in that area. In general, algorithms, mathematical models, AI and real-time bidding have more say than humans.
But that's something that's, admittedly, hard to avoid in the existing ecosystem. Other issues were more avoidable. For example, when we first tried one network, it utterly broke our website. They offered a super simple setup – just give them access to your Cloudflare and they'll proxy everything, not only adding the necessary ad scripts, but also optimising the traffic and speeding up your page. Sounds cool, didn't work. Well, it probably does for most of their publishers, but in the case of our PWA and some DNS entries their scripts were not expecting, we ended up with no ads and no ability to log in. Not fun.
You'd think that adding a script that's loaded after the rest of the page and limits its impact to a bunch of clearly defined placeholders shouldn't really affect the performance of the page much. However, one network's scripts were dragging us down so much that we ended up switching to another mainly for that reason. Another broke an essential feature of the page in a way that was troublesome to debug.
Update: After installing Sentry we've found out just how many JS errors the ad scripts were producing, geeeez…
All the networks looked at our traffic numbers and predicted monthly revenue in the lower five digits – the kind of money that we could set up a proper foundation with, run educational campaigns, spread our mission, employ people, make the project a full-time job, not just a side project of passion created in spare time. But in practice it was not just lower, but an order of magnitude lower. Still not bad: enough to cover the server, domains, cloud and other maintenance costs, do occasional offline stuff like printing our zine, creating pins, stickers and flyers and handing them out at Herts Pride and Toruń Pride, and distribute the rest among the contributors in the form of a volunteer allowance. Basically, it's not some wild, life-changing money, but it keeps the project running while also rewarding contributors for our time and effort and helping us pay our bills. So we're really grateful to be in that position.
What's really bugging us, though, is how volatile and unpredictable the whole thing is. Of course we're expecting ups and downs, but sometimes the revenue can just randomly drop by half in the span of two months, with no explanation. One explanation that we did get, I'm not kidding, is that after we'd spent a lot of time optimising the setup with A/B testing, our ad units performed so well that the system counted the traffic as suspicious and just didn't pay for it. Their proposed solutions (adding padding and labels to ads so that users are less likely to click on them accidentally) didn't help at all. And there's no one to complain to about it, you just have to deal with it.
There are so many variables at play, it's super confusing and unclear. Our current ad network tripled (!) the number of ad impressions shown to our users compared to the previous company, without that being proportionally reflected in the revenue, or the user experience becoming any more optimised over time.
One network wouldn't show us any reporting for the first month – we were completely in the dark. And once we finally got access to the dashboard, it was way less useful than what other networks provided (e.g. even trying to figure out which ad unit is which wasn't easy, because the dashboard simply didn't show the unique identifiers that are used in the code) – which we couldn't have known until we were a month in.
The flexibility of the setup varies a lot between the networks. For example, one has a simple button to switch their consent box from “Accept / More options” to “Accept / Decline / More options”, while another took two weeks to fulfil our simple request “we're not going live until declining is just as simple as accepting”. And none had an option to show the consent banner to everyone, not just people who are in jurisdictions where it's required by law, unless we implemented a big chunk ourselves.
I'm sure we could make our setup better in one way or another. There might be a good balance somewhere, there's probably an ad network that's the best match for us that we haven't tried yet. But I don't want to be spending more time on making monetisation bearable than I spend working on the actual website. It's already a way more annoying and complicated project than I was expecting it to be…
We're currently onboarding with a fourth company, hopefully to be accomplished this week. They look very promising, so keep fingers crossed that they're actually what they seem to be and that this switch will be the last one 😉
We're also working on implementing an idea (which was also independently proposed by a user) to offer a subscription for a few bucks which would remove all ads for them and the visitors of their profile, add a “supporter” badge in their card, and support the project.
I really hope that works out, because dealing with the ads setup (even though personally I'm mainly helping with the technical side and am shielded from a lot of other bs) is really sucking the joy out of creating this passion project.
A user suggested adding a timezone field to Pronouns.page. This website lets people, among other things, create a card with info about how they want to be referred to – their pronouns, names, etc. But it also has some generic fields, like age or links, so the team was on board with the idea of adding some more basic info: not just a timezone, but while we're at it, why not also a location?
Well, adding a location is not as easy as it seems…
First thing that comes to mind when I think of adding a location field in a profile is my experience running #TeamLocked (NSFW). I thought I implemented it in a pretty neat way – but turns out I still couldn't avoid issues.
I wanted the data to be structured, not just a free text field – that way I could add some smaller or bigger features around it: from displaying a little emoji flag next to a country name, to allowing people to search users by country and province.
So I split it into three levels: country, province (which depending on context can mean a state, a province, a województwo, etc.) and city. The first two are selections from predefined lists, while the last one is a free text input – cause I'm not going to manage a database of all the cities, municipalities and villages in the world 😅 But even for countries and provinces, I didn't want to manually manage the lists either – those things change constantly, and I'd rather focus on actually running the page. So I delegated the issue to the United Nations. I wrote a script that fetches the list of countries from here and the list of provinces from here, puts them into a neat JSON file, which then gets used to generate the select fields.
I thought that would be simple and unproblematic… until someone messaged me, angry that my website calls his country “Taiwan, Province of China”. Which is not something I stand for, but I checked and indeed that's how UNECE describes the country code “TW”. It's some kind of a weird compromise between recognising Taiwan's independence and pleasing China. Let's give it a separate country code while still calling it a part of another country… Ugh…
I changed my script to rename that item on the fly, but then more similar issues kept coming. I checked if Kosovo was on the list – nope, despite having an ISO country code (XK) and an emoji flag (🇽🇰), it isn't included on UNECE's list. So I added it. The list of British provinces didn't include London for some reason. North Macedonia's name didn't get updated for a while, and Czechia's still isn't. Even though I had an automated script and was delegating responsibility to an international authority – I still ended up having to put manual effort into it.
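The patching step could be sketched roughly like this – a hypothetical, simplified version, with an illustrative override table rather than our actual data:

```javascript
// Hypothetical post-processing of an auto-generated country list.
// Both the fetched entries and the overrides below are illustrative.
const fetched = [
    { code: 'TW', name: 'Taiwan, Province of China' },
    { code: 'MK', name: 'The former Yugoslav Republic of Macedonia' },
];

// Manual corrections layered on top of whatever the authority publishes:
const renames = {
    TW: 'Taiwan',
    MK: 'North Macedonia',
    CZ: 'Czechia',
};
const additions = [
    { code: 'XK', name: 'Kosovo' }, // missing from UNECE's list entirely
];

const countries = [
    ...fetched.map((c) => ({ ...c, name: renames[c.code] ?? c.name })),
    ...additions,
].sort((a, b) => a.name.localeCompare(b.name));
```

The nice part of this shape is that the script can keep refetching the upstream list, while the overrides stay small and reviewable.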
Despite unexpected issues, it ended up working really well in the end. But simply applying a similar approach (well, probably just reusing the very same script) wouldn't really fit Pronouns.page well.
First of all, unlike #TeamLocked, an adult dating website, Pronouns.page is family friendly, and a lot of its users are minors. I want to make sure that I don't create a feature that could inadvertently cause harm. What if a 13-year-old queer kid sees a free text input field called “location” and, without thinking much, just puts in their full home address?
We'd also like to avoid unnecessary political conflicts. Don't get me wrong, we're quite a political team, but our mission is to tell enbies (and queers in general) that they're amazing, that they deserve respect and recognition of their identity, and that they have a right to shape their language to meet their needs; not to get tangled in endless discussions over which government has jurisdiction over which piece of land. Actually, many of us are anarchists, so we'd rather see those governments fall than start showing their flags as a location indicator and validating the notion that artificial political borders are the best way to describe where you are 😅
Then there's the question of localisation. In the database we'd of course save the country info as a simple country code, but when displaying it – on a page so heavily focused on language and localisation – we'd have to take into account whether “DE” should get shown as “Germany”, “Deutschland”, “Duitsland”, “Niemcy”, … There are databases online that we could use for that, but it's adding another layer of complexity…
There is a way to describe one's location that's super simple and (mainly) independent from politics – just use latitude and longitude, right? Other than the prime meridian being an arbitrary choice, it literally just describes one's location on a globe using simple geometry.
There is one problem though, at least for our use case: it's way too accurate. We want to allow users to share some very basic info about themselves, to let others know whether they live nearby or across an ocean – and not to be their GPS 😅
What if we rounded it to the nearest degree, though? Or 5 degrees? That way we'd only know that someone lives within a rectangle of a few hundred kilometers by a few hundred kilometers, giving us a healthy dose of inaccuracy and therefore privacy.
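A tiny sketch of that rounding idea (the 5° grid size is just the example from above):

```javascript
// Snap coordinates to a coarse grid so they only reveal a rough area.
// GRID = 5 means the stored point is the centre of a 5°×5° cell –
// a few hundred kilometers per side, depending on latitude.
const GRID = 5;

const coarsen = (lat, lng) => ({
    lat: Math.floor(lat / GRID) * GRID + GRID / 2,
    lng: Math.floor(lng / GRID) * GRID + GRID / 2,
});

coarsen(52.37, 4.89); // → { lat: 52.5, lng: 2.5 }
```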
Well, I made a little proof of concept of how selecting one's location would look, and TBH it's not too nice or intuitive. It's just a map with some arbitrary rectangle following your mouse; it looks confusing when you live near an edge of such a rectangle, it would require some fancier projection than Mercator, it wouldn't be too easy to use on mobile, etc. etc.
All of those potential issues can be overcome, of course, but I'd rather settle for something easier, if possible.
Well, the answer has been there all along! We were going to implement timezones anyway, right? We can use that for location information. After all, time and space are very closely related!
The simplest way to approach storing one's timezone would be to save the offset, like UTC-5. But offset ≠ timezone! My timezone is UTC+1 now, but in March it will switch to UTC+2 even if I don't move anywhere – thanks, DST 🙄 It's more accurate to use IANA's timezones, in my case Europe/Amsterdam – that way a library can just calculate the proper offset itself.
But as you can see, that format already includes some location information! Why don't we just use it? Here's how those timezones look on a map:
It looks exactly like what we need! It splits the Earth into chunks that look less arbitrary and clunky than purely geometric lat/long rectangles. Chunks that are big enough not to give away too much of someone's location, but small enough to give a pretty good understanding of how far away someone is from you. Sure, in many cases those splits follow country borders, but at least the associated labels focus mostly on cities and geographical names rather than political ones.
So, let's get to actually implementing the timezone field! It ended up being way simpler than I imagined. Here's how the form looks:
Turns out we can just use a built-in JavaScript feature to list all the IANA timezone codes:
this.timezones = Intl.supportedValuesOf('timeZone');
I'm already using Luxon in the project, so let's leverage its timezone features to add the “Detect automatically” button:
this.timezone = DateTime.local().zone.name;
Yup, that's it. Well, setting aside all the boring stuff, like migrations, server-side handling, autocomplete component, etc. – but the timezone part itself was incredibly easy!
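For what it's worth, the detection part can even be done without Luxon, using only built-in Intl – a minimal, library-free sketch:

```javascript
// List all IANA timezone identifiers the runtime knows about…
const timezones = Intl.supportedValuesOf('timeZone');

// …and detect the visitor's own timezone, no library needed:
const detected = Intl.DateTimeFormat().resolvedOptions().timeZone;
```

(Note that Intl.supportedValuesOf needs a fairly recent runtime – Node 18+ or a modern browser.)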
I also added switches to let users choose whether their continent/ocean and location field should be explicitly displayed on their card (the full timezone code needs to be published by the API anyway, in order to correctly calculate the offset, but we can decide whether to show it in an easily accessible way).
And here's how it shows up in the profile:
The clock is of course dynamic. I used Luxon's built-in localisation to be able to show for example “1:35 PM” on the English version while Polish says “13:35”. I also had to remember to include weekday, so that it's clearer if someone is a day ahead or behind you. One nice extra touch is that since FontAwesome has multiple “globe” icons, each focusing on a different continent, I could even make the icon dynamic 😉
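Just to sketch the idea (this isn't our actual Luxon code, and Europe/Warsaw is only an example value), the built-in Intl.DateTimeFormat can produce the same locale-aware clock:

```javascript
// Format the same instant for two locales, in the profile owner's timezone.
const instant = new Date(Date.UTC(2023, 0, 2, 12, 35)); // a Monday, 12:35 UTC

const clockFor = (locale, timeZone) =>
    new Intl.DateTimeFormat(locale, {
        timeZone,
        weekday: 'short', // include the weekday, so "a day ahead" is visible
        hour: 'numeric',
        minute: '2-digit',
    }).format(instant);

clockFor('en', 'Europe/Warsaw'); // e.g. "Mon, 1:35 PM"
clockFor('pl', 'Europe/Warsaw'); // e.g. "pon., 13:35"
```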
Different use cases might require different solutions, but if yours is similar to ours – relatively low accuracy by design, hard to abuse, easy to localise, etc. – keep in mind that timezones could be super helpful.
An IANA timezone encodes so much information: if you know it, you know both approximately where someone is and what time it is there. Pretty cool, huh?
Turns out there are still problems with the IANA timezones database. Kind of expected, huh? I knew life couldn't be too easy – there were definitely gonna be controversies around city names – I just expected there to be fewer of them than with countries.
So the first one we found is this: for the capital of Ukraine IANA uses… the Russian spelling 🤦 (Kiev). We've replaced it with the more appropriate one: Kyiv.
When it comes to DevOps, I'm just the “dev”. I write code, but I'd rather have someone else worry about making sure it keeps running as intended. I manage my personal VPS, I manage some servers at work, but I wouldn't call myself an expert in that area at all. So I'm super proud of myself and how well it went when I migrated a big project to a new machine 😊 The downtime was just 15 minutes! Here's the story, if you're interested.
I bought the server on Wednesday and started setting it up, completely independently of the old machine. I created a setup that from the outside was indistinguishable from the old one, except for using an older database backup. Until Saturday morning they were running simultaneously – the DNS records were pointing to the old IP, but my local /etc/hosts to the new one.
I picked Saturday morning, because mornings are when our traffic is lowest, and that day I was free and could focus on the migration, even if something went wrong and took more time than expected.
An important part of the plan was taking notes – every important command I ran, I documented for myself. In case anything went wrong and I had to start over, or in case I later wanted to migrate my other projects to Hetzner too (and I do), I'd have a recipe basically ready.
The old configs were… meh. Long, repetitive and messy. Just switching from Apache to nginx simplified them massively, but I went further and extracted common parts for all domains and subdomains to make them reusable. Setting up everything for a new language version used to be a half-an-hour-or-so long process – now it takes me a few minutes.
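For illustration, the shape of that extraction might look something like this – the file names, directives and paths below are made up, not our actual config:

```nginx
# snippets/common.conf – the shared bits (hypothetical example)
gzip on;
add_header X-Content-Type-Options nosniff;

# …so that each domain's server block shrinks to a few lines:
server {
    server_name en.example.org;
    include snippets/common.conf;

    location / {
        proxy_pass http://127.0.0.1:30514;  # this language version's node server
    }
}
```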
There are two things that were really annoying for me to figure out: analytics and monitoring. Those things are expensive when your traffic is as big as ours. Corporations with such traffic can afford it easily, but we're not a business – we have no considerable income other than donations and an arc.io widget. So we need to self-host those things.
For traffic analytics, I've recently switched from Matomo to Plausible. As much as Matomo is better than Google Analytics from the privacy perspective (and it's a lot better), it's still really heavy and has way more features than I'll probably ever need. I needed to pay 11€/m extra for a separate database server just for Matomo to store its logs. Plausible, on the other hand, is exactly what I need. So neat!
For monitoring… We used to have one, hosted on AWS Lambda, but it kept causing trouble that I had no time to fix, and in the end I disabled it. Monitoring is quite hard, because it needs to run on dedicated and super reliable infrastructure – we can't monitor ourselves, after all; if the server is down, so is the monitor that's supposed to let us know. I found a really cool tool that keeps blowing my mind with its ingenuity: Upptime. It runs entirely on GitHub Pages and Actions, totally for free (as long as you keep it public).
We have multiple node servers running for different language versions. Each needs to run on some port, and then nginx needs to be configured as a reverse proxy to that specific port. Before, it was really tedious: I'd just enumerate the ports starting at 3001, and for each new version I'd first need to sift through configs to find what the current max value was. I kept looking up which port was related to which domain. Annoying shit.
In the new setup, I wanted to have some single source of truth for the domain-port pairs, but it turned out to be near impossible (at least for my skills). Nginx is strict about its configs being static. I couldn't find an easy way to pass data about those ports to it.
Ultimately, I settled for a compromise system: I unambiguously map each version to a port by converting each letter of its ISO code to its index in the English alphabet. For example for Japanese, ja, the port would be 31001, because “j” is the 10th letter of the alphabet, and “a” is the 1st. Not a perfect system, but it's gonna simplify my flow a lot.
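In code, that mapping might be sketched like this (portFor is a hypothetical name; the 3xxxx prefix follows the scheme above):

```javascript
// Map a two-letter ISO language code deterministically to a port:
// "3" + two digits per letter (a=01 … z=26), e.g. "ja" → 31001.
const portFor = (code) => {
    const index = (letter) => letter.toLowerCase().charCodeAt(0) - 96; // 'a' → 1
    const [first, second] = code;
    return 30000 + index(first) * 100 + index(second);
};

portFor('ja'); // → 31001 ("j" is the 10th letter, "a" is the 1st)
```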
I wanted to switch from supervisor to pm2 for its nicer interface and cool features, but for some reason it kept dropping a significant percentage of requests at (seemingly) random. I couldn't figure out the root cause, so I gave up. That switch wasn't worth that much of my time, I reverted back to supervisor.
A huge issue was that… no emails were getting sent from the new server. Every SMTP request would just time out. I tried everything I could think of, I figured I must've misconfigured something. I asked the team for help, but we couldn't find a root cause. But then I got this random hypothesis – what if Hetzner is blocking port 465 regardless of our firewall settings? A quick search – turns out they do! Ugh… I get their reasoning, seems like a perfectly reasonable approach. Switching ports worked. I just wish they gave notice more visibly, not just an unexplained timeout, so that I wouldn't waste so much time figuring it out.
I struggled with moving Plausible too… It runs in Docker containers – but my knowledge of Docker is pretty limited. I spent way too much time trying to pg_dump and pg_restore my way from one container to the host, to another host via scp, to another container, only to finally succeed and then… realise that the Postgres database is just half of the story. The main chunk of data resides in ClickHouse. Instead of struggling with the whole thing again, I took a different approach: backing up and restoring entire Docker volumes. It worked like a charm! Here are the commands, if you're interested:
# old host
docker run -v plausible_db-data:/volume --rm loomchild/volume-backup backup - > ./db-data.tar.bz2
docker run -v plausible_event-data:/volume --rm loomchild/volume-backup backup - > ./event-data.tar.bz2
scp ./db-data.tar.bz2 pp:/home/admin/www/stats.pronouns.page
scp ./event-data.tar.bz2 pp:/home/admin/www/stats.pronouns.page
# new host
cat ./db-data.tar.bz2 | sudo docker run -i -v statspronounspage_db-data:/volume --rm loomchild/volume-backup restore -f -
cat ./event-data.tar.bz2 | sudo docker run -i -v statspronounspage_event-data:/volume --rm loomchild/volume-backup restore -f -
When everything was ready, on Friday evening, I announced the upcoming maintenance and waited. I had all the commands ready in a notepad.
When the time came, I just… stopped both servers, moved the database, moved plausible volumes, started the new server, and then updated the DNS entries. Simple as that.
It was tons of work over a couple of evenings, but good preparation worked wonders: the actual switch took only a quarter of an hour. Considering how ops is not my forte at all, I'm so proud of having accomplished such a smooth transition 🥰
Among people who create websites or apps there's an understanding that UX, user experience, is massively important. We know that most users either don't have the technical knowledge to use software that isn't intuitive, or they simply don't have time to be bothered to get to know an app that isn't easy to use (and they have many alternatives to switch to).
So you'd think that ease of use of one's products is a common concern among companies of all industries, right? Well, I then moved to a new place and had to assemble a lot of furniture… What an absolute UX nightmare it was!
I'll only focus on one – most terrible – example: the wardrobe. Cause the amount of awful compared to how easily it could've been fixed is just astonishing there.
Our wardrobe came in 9 huuuge cardboard boxes. It took us just about two evenings to actually assemble it – but like a week overall, if we count the time of being too overwhelmed to start, and figuring out what to do and how to even begin.
I know, I know, check the manual, right?
Okay, but where is the manual, huh? Seriously, where is it? We've received nine huge boxes, and before we could even start wrapping our heads around the complexity of the task, we had to open and search through (so glad we had space for all that shuffling) seven of them. Yup, the most logical thing to do as the manufacturer is to put the manual in the box labeled “7/9”, apparently.
We were already starting to lose faith, thinking we'd need to contact the manufacturer, wait until they sent us the missing manual, worry about what else they'd messed up… Nah, they just made us go searching through seven boxes of parts, just because.
Just put the manual in the “1/9” box. That costs you nothing. Or even better, put it outside of box “1/9”, in the plastic wrapper where you put our address label and an invoice. I know, crazy customer demanding impossible accommodations!
But now that we have the manual, there's another challenge: we have no idea which part is which. Some are easy to distinguish, but others are very similar to each other. The first evening of “assembly” was just us going through the manual and through all the items from all the boxes, figuring out which part was the correct size and had the correct holes on the correct side, and then putting a bit of tape on that part, on which we wrote down the element number, as per the manual.
Can you imagine the manufacturer just… doing it for us? Or even, I don't know, putting the info in the manual about which part is in which box? It's literally a bit of ink and a few bits of tape.
Imagine we could just take the manual from the plastic wrapping glued to one box and read things like “take part 5 from box 1/9 and screw it together with part 12 from box 4/9”? Unreal, right?
Those are just the most important points I remember; there were other things that could be improved too. But I think it already makes the point. “Make the manual easy to find” and “let the consumer know which part is which” shouldn't really be hard things to figure out, right?
But apparently it's easier to just sell off a piece of furniture, enjoy the money you cashed in, and not give a fuck what the customer is gonna go through next.
After all, they probably won't need a new wardrobe any time soon, so why give a fuck?
Four years ago I backed Font Awesome 5 on Kickstarter, and in return I received a license to use it and to access the pro features. The license might be perpetual, but the pro features, sadly, are not 😢
If you don't subscribe to a Pro plan, you won't be able to install Font Awesome Pro using npm or yarn.
That's the one feature I need! And on August 1st it will be gone! My dev setup, my deployment setup, of multiple projects, everything depends on fetching Font Awesome from the npm registry.
Luckily, there's a simple way around it 😉
I'm allowed by the license to download the project and create backups, but obviously not to share it with others – so it's all done in a private repository.
First, I fetched the latest version of the project:
yarn add @fortawesome/fontawesome-pro
Then, I just went to node_modules/@fortawesome/fontawesome-pro, initialised a fresh git repository there and pushed it to gitlab.com/Avris/FontAwesomePro.
Now, whenever I want to use FontAwesome 5 in a project, I just go:
yarn add git+ssh://git@gitlab.com:Avris/FontAwesomePro.git
And that's it!
I was surprised that it's so simple. Of course my repository won't get automatically updated with the project, but since version 5 won't be developed anymore, it doesn't matter at all. And it works like a charm!
I strive to optimise this blog's performance as well as I can. But chasing a goal of a lightweight website while keeping it pretty prevented me from realising the obvious truth that the most performant assets are… no assets.
So, inspired by Sijmen J. Mulder's directory of text-only websites, I decided to create a bare version of my blog.
Here's how it went:
Many pages of this blog are already available in multiple formats, eg. the Atom feed of all entries or a JSON version of the list of my projects, so adding a new one, .lite, was relatively easy.
The main part was copy-pasting all <page-type>.html.twig files to <page-type>.lite.twig and removing all the bullshit from them. All the HTML nodes that exist solely to make the page look nicer: wrappers, containers, columns, etc. All the icons, fonts, logos, twemoji, everything that's not essential.
There's no JavaScript loaded whatsoever.
There are no external stylesheets loaded, just some minimal styling in a <style> tag and a few inline style attributes.
Images inside articles had to stay, because in many cases they are a very important part of the content, and not just decoration. But I made them way smaller – max 240px in width – and linked to open a bigger version, if necessary. I also replaced the JS-based lazy loading with HTML5 native loading="lazy".
However, the browser support for this feature is still not perfect – and if it doesn't work on someone's machine, they'll have to download 8 MB (!) of tiny images when visiting /blog.lite… So I implemented a simple pagination, all inside a Twig template, based on an ?after=<timestamp> parameter in the query string.
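The pagination logic itself is tiny – in my case it lives in a Twig template, but sketched in JavaScript (with made-up field names) it's roughly:

```javascript
// Cursor-style pagination over posts sorted newest-first.
// `after` comes from the ?after=<timestamp> query parameter.
const PER_PAGE = 10;

const paginate = (posts, after = Infinity) => {
    const page = posts
        .filter((post) => post.timestamp < after)
        .slice(0, PER_PAGE);
    const last = page[page.length - 1];
    return {
        page,
        // link to the next page, or null once we've run out of posts
        next: last ? `?after=${last.timestamp}` : null,
    };
};
```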
Anyways… It's time for the results! 🥁
Normally, opening the homepage makes 40 requests and loads 522 kB of resources. In the lite version, it's 2 requests (HTML and favicon) that weigh just 15.8 kB. Just 3% of the original weight!
The heaviest page, /blog, makes 37 requests of 3.5 MB. In the lite version, it's 10 requests of 198 kB. Just 6% of the original weight.
Opening a random article made 34 requests of 493 kB normally, and just 3 requests of 31.2 kB in the lite version. Also just 6% of the original weight.
The differences are huuuuuuuge!
Summing up: if you don't mind websites looking ascetic in return for loading quicker, working smoother, and sparing data plan and battery, definitely check out Sijmen's directory.
And on my blog you can just head to /lite or add .lite at the end of any subpage. 😉
Yet another one of my projects, Naked Adventure, grew too outdated to support. I had to rewrite it from scratch.
I took the opportunity to redesign it as well. (screenshots before & after at the bottom)
Here's an overview of what I did:
I've already complained about my mySN laptop here and here, but now it turns out, there's a part three to be written...
Long story short, my main complaint (among many others) was this:
The charger would just disconnect randomly. Then connect back after a couple of minutes. Then disconnect again. One day it was better, the other it was worse. I almost didn’t use it on battery anymore, for fear I would drain it and then not be able to charge it back.
So basically: now it started happening again 🙄
I'm planning to get a new laptop anyway, a decent one this time, but it's super annoying that it had to happen now, when I still have to wait a few weeks until Apple releases their new MacBooks Pro.
I do need to vent, though... So here we go:
Hackers know your password. I'm like 99% sure they do. Just go to ';--have i been pwned? and enter your email(s). See? Your password is as good as public.
We all hate passwords, don't we? Trying to keep them easy to remember, but also hard to break, while also complying with stupid arbitrary rules, while having corporate force you to change them regularly... They're a pain in the ass.
They aren't even that safe. How can you be sure that the administrator of a website you trusted with your password even hashes it? How do you know they salt it? How do you know they don't use an outdated hashing algorithm? How can you be sure they won't have a data breach, with their database leaked and your password recovered with supercomputers?
Damn, I'm an administrator myself, and I still can't be sure I do everything right. I got a panic attack last week when a user reported that his password was stolen and used for blackmail (Recognising red flags in blackmail emails) – in this case it was a reused password that leaked somewhere else, but still... Even though I'm storing people's passwords in the best way I can, I would feel so much better if I just didn't have to store them.
How about we just stop using passwords?
If you log in with Facebook, Twitter, Google, Apple or whatever, you make me a happier developer. Yes, your social media account is still protected by a password, but at least I don't know it. If anything happens to your social media account, it's gonna be a problem of a huge corporation with a well-funded infosec department – not a random developer who makes websites for fun and does their best keeping them safe.
In my first job I saw in the logs that many users used the “remind password” feature not as a recovery option, but as their main login method. Today, this quirk is actually becoming an increasingly popular security trend. If you go, for example, to Whereby, they'll only ask you for your email, not a password. Every time you try to log in, they'll just send you an email with a one-time code.
I implemented a similar approach in my new project, Avris Booster: Quick start of new projects (under active development). Now I don't need separate “register”, “log in” and “remind password” forms – I just have one. Now I don't keep any passwords – just some temporary codes that can't be reused on other websites.
And users don't have to worry about remembering, storing or losing their passwords.
Whatever authentication method you use (or are forced to use), it's always better to have a second one. If a website offers MFA, it's smart to set it up (I can strongly recommend the Authy app for it).
This way, even if your password gets stolen, the hackers will still need more (your phone) to access your account.
There's also such a thing as Hardware Security Modules. Some websites offer login with PGP keys. And there's probably a lot more options, but no time to dive into them right now.
Let's face it, passwords are still inevitable. Until webmasters stop being so password-centric, we have to keep using passwords, if we want to keep using their products.
The best we can do in this situation is to make sure that all of our passwords are strong and unique (so that a hacker who found out our Spotify password can't use it to log in to our bank or email). Of course, nobody can remember tens or hundreds of strong, unique passwords. That's why we have password managers. I can strongly recommend KeePass. Or even saving the passwords in the browser.
Seriously, anything is better than using the same password for some shady forum as you use for your online banking.
Yet another one of my projects, Avi • Simple placeholder avatars, grew too outdated to keep supporting it. I had to rewrite it from scratch.
I took the opportunity to redesign it as well.
It's quite a simple project, so it only took me a couple of hours. Here's what I did:
The most important maths lesson in my life wasn't actually that hard. It wasn't even really about maths.
Back in primary school we had the following problem to solve:
There's a guy who likes his coffee... strange. He gets a cup of black coffee, drinks half of it, then fills the empty half of the cup with milk. Drinks half of it and fills back up with milk. His coffee gets whiter and whiter that way, until after the eighth time he just drinks it whole. How much coffee and how much milk did he drink overall?
We spent the entire lesson calculating this shit. An entire class of primary school kids trying not to make a mistake in their fractions. And not just regular kids – it was an extracurricular for pupils gifted at maths.
At first he drank ½ a cup of coffee. Then he drank a half of this half-coffee half-milk mixture. So now it's ½ + ½⋅½ = ¾ cup of coffee, and ½⋅½ = ¼ cup of milk. Then he drank a half of the ¼-coffee ¼-milk mixture, or something... Then five more times...
Our young brains, which had just learned about fractions in the first place, were now melting, basically trying to calculate the 8th power of some messy fractions. Everyone got a different result. We were tired, disappointed and ready to give up.
So then the teacher gave us the answer. It's one cup of coffee and 4 cups of milk. Simple as that. Not 507/512 cups of coffee and 1034/256 cups of milk, or whatever we came up with.
Why were we so wrong and tired, while she managed to calculate the correct result in her head in a second?
Because we were calculating how much he drank (exactly what the question was), while she calculated... how much he poured into the cup. He drank everything he had poured, so why not?
He poured in a full cup of coffee and 8× half of cup of milk, and then drank it in an overcomplicated way. That's it.
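Just to double-check her shortcut, here's a quick simulation (a sketch of mine, not part of the lesson; since everything is halves of halves, floating point stays exact here):

```javascript
// Track what's in the cup and what gets drunk, step by step.
let coffeeInCup = 1, milkInCup = 0;
let coffeeDrunk = 0, milkDrunk = 0;

for (let i = 0; i < 8; i++) {
    // drink half of whatever mixture is in the cup...
    coffeeDrunk += coffeeInCup / 2;
    milkDrunk += milkInCup / 2;
    coffeeInCup /= 2;
    milkInCup /= 2;
    // ...and top the cup back up with half a cup of milk
    milkInCup += 0.5;
}

// after the eighth top-up, he drinks the whole cup
coffeeDrunk += coffeeInCup;
milkDrunk += milkInCup;

console.log(coffeeDrunk, milkDrunk); // 1 4 — exactly the teacher's answer
```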
So here's my math lesson: if you're stuck solving a problem (any problem, not just primary school maths problems), but it gets increasingly harder and harder to solve – don't keep plowing like an idiot, and stop being so sure you're gonna solve it if only you just work hard enough. Instead, take a step back, take a deep breath and try to find a totally different way around it.
Sometimes it's way better to work smart than hard.
The whole class (including the teacher, who kept pretending to help us all along) spent an entire hour calculating boring and complex stuff, when we could have just spent a minute thinking about it first.
We didn't waste that hour. It might have been one of the most valuable hours of my early life.
I've seen some beginner programmers asking themselves: why do I even need constants? Variables I get, they're super important, but why have an extra thing that's like a variable, but worse? It can't even change! And if I know that const NUMBER_OF_COLUMNS = 3
, why can't I just write 3
?
Well, first of all, it's not always just a simple 3
. Sometimes, for example, a constant is 3.141592653589793...
. In this case writing Math::PI
is not only shorter, prettier, and more meaningful, but usually more accurate (how many digits of π can you write down? your standard library knows a lot of them) and not prone to typos (misspelled constant name just won't compile...).
let circ1 = 2 * Math.PI * radius;
let circ2 = 2 * 3.141582653589793 * radius;
The first line better resembles the well-known mathematical formula, and it doesn't require googling (or remembering) the actual value. The second one? Apart from being ugly, it's also wrong. Look closely.
Creating a constant with some value gives this value a label. Gives it an extra meaning. It's not just a three anymore. Now it's the NUMBER_OF_COLUMNS
(that just happens to be three).
Now it's a three that's different from other threes.
Let's say you have 3
columns of data updated every 3
minutes. Your code is full of threes. After coming back to your code after a break, you don't know anymore what those threes mean, they're just some numbers. If one day you have to change the layout to two columns, how do you know which threes to replace with twos, and which ones to keep?
Do you have to look through your entire codebase, read it and understand it?
Or maybe instead you have NUMBER_OF_COLUMNS
columns of data updated every UPDATE_INTERVAL
minutes?
const NUMBER_OF_COLUMNS = 3;
const UPDATE_INTERVAL = 3;
Easy to understand wherever they are used, and super easy to change in just a single place.
Constants are only constant during execution. During development, they actually make changes easier.
And yes, you can do “less” with them, but that property actually adds value. It lets the programmers (and the compiler) know that it's not supposed to change. It prevents your app from being inconsistent because someone accidentally overwrote a variable.
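A quick demonstration of that safety net (my own sketch – the engine throws instead of letting the overwrite slip through silently):

```javascript
const NUMBER_OF_COLUMNS = 3;

let error = null;
try {
    NUMBER_OF_COLUMNS = 2; // an "accidental" overwrite
} catch (e) {
    error = e; // TypeError: Assignment to constant variable.
}

console.log(error instanceof TypeError); // true — the layout stays consistent
```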
So there you go: constants, although less flexible, are just as useful as variables.
Here's a list of tools that I use for work and can fully recommend:
Bootstrap is awesome. But it’s also a lot. Modals? Popovers? Tooltips? Badges? Toasts? I don’t use any of that!
I already don’t include any of Bootstrap’s JavaScripts, but I should definitely clean up its CSS.
So instead of including
@import "~bootstrap"
I went to its source and copy-pasted the modules that it was loading over there, commenting out the ones I don’t need:
@import "~bootstrap/scss/functions";
@import "~bootstrap/scss/variables";
@import "~bootstrap/scss/mixins";
@import "~bootstrap/scss/root";
@import "~bootstrap/scss/reboot";
@import "~bootstrap/scss/type";
//@import "~bootstrap/scss/images";
@import "~bootstrap/scss/code";
@import "~bootstrap/scss/grid";
@import "~bootstrap/scss/tables";
//@import "~bootstrap/scss/forms";
@import "~bootstrap/scss/buttons";
//@import "~bootstrap/scss/transitions";
//@import "~bootstrap/scss/dropdown";
@import "~bootstrap/scss/button-group";
@import "~bootstrap/scss/input-group";
//@import "~bootstrap/scss/custom-forms";
//@import "~bootstrap/scss/nav";
//@import "~bootstrap/scss/navbar";
@import "~bootstrap/scss/card";
//@import "~bootstrap/scss/breadcrumb";
//@import "~bootstrap/scss/pagination";
//@import "~bootstrap/scss/badge";
//@import "~bootstrap/scss/jumbotron";
@import "~bootstrap/scss/alert";
//@import "~bootstrap/scss/progress";
//@import "~bootstrap/scss/media";
//@import "~bootstrap/scss/list-group";
//@import "~bootstrap/scss/close";
//@import "~bootstrap/scss/toasts";
//@import "~bootstrap/scss/modal";
//@import "~bootstrap/scss/tooltip";
//@import "~bootstrap/scss/popover";
//@import "~bootstrap/scss/carousel";
//@import "~bootstrap/scss/spinners";
@import "~bootstrap/scss/utilities";
@import "~bootstrap/scss/print";
The only issue is that I do use forms.scss
a bit. There is a search form in the header that has its <input>
field styled.
So I just inspected that element in Opera and pretty much just copy-pasted the three relevant selectors.
Bam! Starting off with a minified CSS file of 244 KB, now it’s down to 184 KB. A quarter of its weight is now gone.
But that’s just the first step.
The real big deal in terms of the website weight were the FontAwesome icons. There’s thousands of icons included, but I only use a dozen – so why do I make visitors download them all?
You can load FontAwesome in a number of different ways. One of them is SVG sprites. You can include the definitions of the icons in the form of SVG <symbol>
s, and whenever an icon is needed, just use a <use>
tag to reference it.
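The resulting markup pattern looks roughly like this (a sketch – the icon name and class here are made up):

```javascript
// Each icon on the page becomes a tiny <use> reference to a <symbol>
// defined once in a hidden sprite at the top of the document.
function icon(name) {
    return `<svg class="icon" aria-hidden="true">` +
           `<use xlink:href="#icon-${name}"></use></svg>`;
}

console.log(icon('heart'));
// <svg class="icon" aria-hidden="true"><use xlink:href="#icon-heart"></use></svg>
```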
So I wrote a simple service (plus a Twig extension) that would do exactly that: it injects <use>
tags wherever an icon should be displayed. I just had to add some CSS to display the SVG icons the same way as webfonts in terms of size, position and using the color of the surrounding text:
.icon {
width: 1.1em;
height: 1.1em;
vertical-align: -.125em;
fill: currentColor;
}
Et voilà! The homepage of my blog went down from 765 KB at the beginning, to 704 KB after the first step, to 393 KB now!
Btw, I’ve put my little helper into a library, if you want to check it out.
There are caveats, though... SVG is heavy. The two .woff2
files I was using are 261 KB overall, while the corresponding two sprites are 2.1 MB. But if I filter their content down to just the icons I actually use (which is way simpler to do in SVG than in webfonts), it goes down to just 44 KB! So if your website is using a looot of icons, you’ll probably be better off generating a custom webfont.
There can also be issues related to warm caches possibly circumventing the Optimiser, with data that includes new icons being loaded dynamically in JS (which I don’t do), with the increased execution time (which doesn’t bother me since I use an HTTP cache), etc. Still, in many cases this trick can be very useful. Like mine.
Switching from webfonts to dynamically filtered SVG sprites not only removed the need for two requests for .woff2
files, but also the need for the CSS that maps class names to font glyphs. My CSS file went down again, from 184 KB to just 96 KB.
So here I am: having spent not even a full evening on it, doing optimisations as simple as they can get, and ending up with the website trimmed of half of its fat.
Nice 😎
It wasn’t really meant for the New Year, but I’ve had plenty of free time on my hands during the holiday break, so here it is already: a brand new version of my blog 🥳
Where to start? I got increasingly annoyed by the old version. The design started seeming boring and blunt. The code started growing unstable. It was using the Micrus framework, which I stopped supporting a year ago. It was using Micrus Assetic – an outdated method of asset management, which highly depended on the server configuration and binaries installed. The deployments were “iffy”, to say the least. I was afraid of sending even small changes to the server, for fear something might blow up again. Some small things I didn’t even notice were wrong: like still warning about Google Analytics tracking, even though I migrated to Matomo ages ago.
I decided to keep it simple. To go with a “less is more” approach. I’ve removed soooo many features that just weren’t useful enough.
I used to have an admin panel: except I didn’t need any fancy editor, the main content was written in (enhanced) Markdown anyway. So for this version, I’ve decided on a filesystem-based approach: I’ve ditched the database completely, I don’t need any ORM, I don’t need an admin panel. I just have a bunch of SUML files tracked by Git. Using Esse CMS.
I used to have three language versions of the UI – but why? I don’t think anyone really cared about not seeing the names of the categories in Polish, when most of my recent posts were in English anyway. So now the posts are still available in the language(s) they were written in, but I’m no longer maintaining multiple versions of the UI, or the features to filter content by language. What a relief!
I used to have a “random post” box. I used to have tag clouds. I used to have an embedded Twitter timeline. I used to have the latest posts on the “about me” page (really?). I used to have a link shortening feature (”avris.it/l/<whatever>”) that I’ve never used, not a single time. I used to have a separate controller and database fields to support redirects for legacy URLs from the even earlier version. I used to have database fixtures. I used to have a console command to export legacy comments to Disqus. I used to have tooltips and popovers. All gone now! 😍
I replaced the infinite scroll with... just loading the whole page at once 🤷 It’s not that big by today’s standards anyway – the heaviest list of posts is 113 KB of compressed HTML. Instead, I now lazy load images using the IntersectionObserver
API. Way simpler than infinite scroll or pagination – which aren’t really necessary with this amount of data.
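The lazy loading boils down to something like this (a sketch, not the blog's actual code; the callback is separated out just to keep it readable):

```javascript
// Images keep their real URL in data-src; it only gets moved to src
// once the image scrolls close to the viewport.
function onIntersect(entries, observer) {
    for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target;
        img.src = img.dataset.src; // triggers the actual download
        observer.unobserve(img);   // each image only needs this once
    }
}

if (typeof IntersectionObserver !== 'undefined') {
    // start loading a bit before the image actually becomes visible
    const observer = new IntersectionObserver(onIntersect, { rootMargin: '200px' });
    document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
}
```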
Oh, and the website is now using a fast and sleek Symfony setup with an HTTP cache 🥰
Design-wise, I use the default Bootstrap theme, with only minor changes of variables and minimal additional styling. The main feature, obviously, are the tilted elements. That’s enough to add some personality to the website, without making it overcomplicated. Compared to the previous version, it’s way more contrasting, better tailored for wider screens and clearer.
Overall, 41090 (!) lines of code were removed:
There’s nothing else in the (programming) world I love more than removing useless code!
Deployments are now stable, simple, reproducible and revertible, thanks to Symfony, Webpack, lack of database (no db = no migrations), and most importantly my recent child: Avris Deployer.
So, to finish this post, let me just quickly show you the four versions I’ve been through so far:
My first blog was called “Silva Idearum” (Latin for “Forest of ideas”). It was anonymous, spirituality-oriented, Polish-speaking, Joomla-running and with a “too much” design.
After coming out and becoming independent from my parents, I was finally able to put my name on my posts. A new domain and a redesign were also in order.
As announced in Brand new blog…
For quite a while my VPS was misconfigured – any HTTP requests it got but couldn’t assign to a vhost, it redirected to the main website, avris.it. I didn’t think it would be a big deal, until I recently found out that my post Ungoogling is indexed by Google under https://askara.avris.it/blog/ungoogling
This subdomain hadn’t existed for a long time already, and my server no longer serves a certificate for it – but browsers remember its HSTS policy, so they end up showing users a scary error message.
I had to do something about it.
First of all, users need to see something other than a security warning. That means I need a wildcard certificate.
Fortunately, Let’s Encrypt offers them now, and it’s totally free. I just followed the instructions to obtain one, and then configured Apache to serve the /www/default
directory with the *.avris.it
certificate for all requests that don’t fit to any vhost.
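The catch-all vhost can be sketched roughly like this (an illustrative sketch – paths and names depend on your setup, not copied from my actual config):

```apache
# If this is the first vhost Apache loads for the address, it handles
# every request that doesn't match any other ServerName.
<VirtualHost *:443>
    ServerName catchall.avris.it
    ServerAlias *.avris.it
    DocumentRoot /www/default

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/avris.it/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/avris.it/privkey.pem
</VirtualHost>
```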
Once users can see the website, I can show them things. Ideally, just a 404 with information that it’s not a valid domain, and a suggestion where they might have wanted to go (the same request string, but with the base domain). Easy.
Btw, I used Water.css, a ridiculously simple CSS framework – I just added two lines, no classes, and the page already looks way better!
But that doesn’t solve the root cause: bots are confused about which domains they should be using. They don’t care whether my certificate is working or not, they don’t understand the message I left there for the users.
They need a proper HTTP 301 Moved Permanently
. So I had to add a simple check to recognise whether I’m serving a bot or a user, and adjust the response for each of them.
So, here’s what I ended up with:
DirectoryIndex index.php
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{REQUEST_URI}::$1 ^(/.+)/(.*)::\2$
RewriteRule ^(.*) - [E=BASE:%1]
RewriteCond %{ENV:REDIRECT_STATUS} ^$
RewriteRule ^index\.php(/(.*)|$) %{ENV:BASE}/$2 [R=301,L]
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule .? - [L]
RewriteRule .? %{ENV:BASE}/index.php [L]
</IfModule>
<IfModule !mod_rewrite.c>
<IfModule mod_alias.c>
RedirectMatch 302 ^/$ /index.php/
</IfModule>
</IfModule>
<?php
function isBrowser($ua): bool
{
if (!$ua) {
return false;
}
$isProbablyBot = (bool) preg_match('#bot|crawler|baiduspider|80legs|ia_archiver|voyager|curl|wget|yahoo! slurp|mediapartners-google|facebookexternalhit|twitterbot|whatsapp|php|python#i', mb_strtolower($ua));
$isProbablyBrowser = (bool) preg_match('#mozilla|msie|gecko|firefox|edge|opera|safari|netscape|konqueror|android#i', mb_strtolower($ua));
return $isProbablyBrowser || !$isProbablyBot;
}
$url = 'https://avris.it' . $_SERVER['REQUEST_URI'];
if (!isBrowser($_SERVER['HTTP_USER_AGENT'] ?? null)) {
http_response_code(301);
header('Location: ' . $url);
die;
}
http_response_code(404);
echo <<<HTML
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>404 – Not found</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="shortcut icon" href="https://avris.it/assetic/gfx/favicon.png" />
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/kognise/water.css@latest/dist/light.min.css">
</head>
<body>
<h1>
404 – Not found
</h1>
<p>
This is not a valid subdomain.
</p>
<p>
Did you mean <a href="$url">$url</a> ?
</p>
<hr/>
<p>
<small>
You'll be redirected there in <span id="countdown">15</span> seconds anyway... 🤷
</small>
<p>
<script>
const \$el = document.getElementById('countdown');
let seconds = 15;
setInterval(_ => {
if (seconds === 0) {
window.location.href = '$url';
return;
}
seconds--;
\$el.innerHTML = seconds;
}, 1000);
</script>
</body>
</html>
HTML;
Platforma Obywatelska promised to introduce electronic voting. That party isn't exactly known for keeping its promises, so I'm not too worried that this terrible, terrible idea will come to life thanks to them. But the topic struck a chord with me, because I can see how fascinated people are by the option and how uncritically they support it, thinking that since computers make everything else better, they must make voting better too.
Well, they really don't have to.
Although it might seem so at first glance, from a computer science standpoint elections are not on a similar “difficulty level” as, say, online payments. People think: if I can safely wire a few thousand from my browser, or handle official business online, why would it be so hard to handle voting there as well?
In the first case, the main problem to solve is making sure that both sides of the communication (client and bank, citizen and office, etc.) can be certain of the other party's identity, and that nobody can eavesdrop on them. We solved those problems long ago and we keep refining the solutions.
Voting, however, adds another element: once we've made sure we're really talking to Jan Kowalski, that Jan Kowalski is eligible to vote, and that he hasn't cast his vote yet, we have to guarantee him full anonymity. We cannot know how Jan Kowalski voted. Profil Zaufany (Poland's governmental e-ID) solves the first part, but not the second. The same goes for all the other buzzwords I've heard from the idea's supporters: face recognition, fingerprints, and above all that mythical blockchain.
Blockchain, my dears, is essentially a very complicated and very slow database. It doesn't solve the problems of electronic voting. Above all because, by definition, it is verifiable.
And the Constitution guarantees the secrecy of the ballot. And rightly so!
It's not that I'm ashamed of my vote and would rather nobody knew about it. It's about the integrity of the whole process.
If elections weren't secret, it would open the door to all kinds of fraud: vote buying, voter intimidation, etc. You cannot, however, force someone to vote a specific way if you have no physical means of checking how they voted.
Another problem is that electronic elections are much easier to rig. The fact that it's more convenient for you to vote without leaving home also means it's more convenient for the fraudster – they don't have to leave home either.
Rigging a traditional election requires a huge amount of resources. Rigging an electronic one, in the worst case, takes a single security hole in a single server, exploited remotely from the other end of the world.
In traditional voting, you have physical polling stations and people on site, vouching for the integrity of the voting and the counting, and most importantly keeping each other in check.
In electronic voting, you have an IT system that, from the voter's perspective, is just a black box controlled by the government. Would you seriously trust it?
Imagine PiS opening an “independent” call centre, where “independent” phone operators would collect people's votes without them leaving their homes. Would you consider it a convenient way of voting, or rather a convenient way of rigging the result?
Add computers to the mix and it would still work pretty much the same. There would simply be a way to verify your identity, and no way for anyone to eavesdrop on the conversation between you and the centre. The issue of not trusting the centre itself remains unsolved. Just like the issue of a potential gun pressed to your temple to make you tick the right box on the screen.
Electronic voting has already taken place in a few countries around the world. The loudest example is probably Estonia, where the experiment turned out to be, surprise surprise, a fiasco: Wikipedia, estoniaevoting.org
I don't doubt that computer scientists will someday come up with a system in which the integrity of the elections is just as safe as (or safer than) in traditional voting. But that day hasn't come yet. In 2019, we don't have the technology to hold electronic elections that would meet democratic standards and adequate security standards.
What's next? A party promising everyone “a teleporter in every home”?
Here's an idea: how about we invent that technology first, and only then promise voters we'll roll it out, hmm?
They just logged me in without asking. WTF?!
Seems like the “Login with Facebook” button works as a script that shares cookies with the facebook.com domain, basically giving them access to each other and letting a third-party website know my FB account before I even use it to log in!
I stopped using FB over two years ago, removed most of my personal data, and I only go there when I need to contact someone and have no other option.
I’ve only now realised that’s not nearly enough to protect my privacy. FB keeps tracking you even on unrelated websites.
So I’ve logged out of Facebook. And removed all their cookies. And removed the damn Pinterest account.
And after I move some stuff out of the Google ecosystem, I’m going to log out of it as well. They’re even worse at respecting users’ privacy 🤮
All the giants thinking you have a right to track me wherever I go online: fuck you all 🖕🖕🖕🖕🖕
Did anyone receive a message recently that contained a video of me watching porn? 😆
Because apparently I was being blackmailed that all my contacts would receive it if I didn’t pay 202€ in BTC. Alas, I didn’t check the spam folder, so I’d missed the deadline a week ago 🤷
Despite so many red flags, people seem to be buying this bullshit. Those criminals have received 4.5 BTC already (~14k€). Within A WEEK!
Please, educate yourself about cybersecurity, people... And don’t panic! 🙂
I just got a report of a scam attempt that did include a “proof”, so let's take a look at it:
Yes, the attacker knows your password. But they don't tell you which service this password grants access to, which is pretty weird... If I were an asshole and had that information, I'd use it: at least to further prove to you that I know private shit about you, at worst to log in to that account, take it over and demand money for giving it back. And yet, they didn't. Suspicious.
But anyways... if you go to the ';--have i been pwned? website and enter your email, you'll see the list of data breaches over the years where your email was found.
This guy's email was in 18 of them. Some were connected to the same reused password. So here's how the attacker knew his password: they didn't hack his computer or gain access to his camera, they just found it (or bought it) on some shady website. The attacker probably doesn't even know what service this password was for or whether it's even still valid.
Good news about this attack: as of today, not a single cent was sent to that BTC address 🎉
So what's the takeaway?
hiVxV788S2dm8rK6P5qH
? Bitch, even I don't know that's my password 🤷
Depending on one company for all of your data is pretty risky. Even if we ignore the obvious privacy concerns of some corporation knowing everything about you... Just imagine what would happen to you personally, if one day that corporation would just... disappear for whatever reason. Say, Google gets a huge fine from the European Commission for one of their monopolistic practices or for shitting on their users’ privacy, and it turns out they don’t recover from that. How screwed are you?
One day you lose your emails, photos, passwords, documents, notes, calendar, what else?
So, recently I decided to diversify my technical dependencies. Not to boycott Google completely, but to at least use it less.
If you think about it, there’s usually no need for the search engine to know who you are in order to serve you useful search results, right? Even for the purpose of making money on ads: if you’re looking for “barbecue”, they’ll show you adverts of grills, because that’s what you’re looking for right now, and not adverts of Cloud Storage, because they know from somewhere else that you might need it... Yet, Google still collects plenty of data about you when you search...
The switch to the privacy-oriented DuckDuckGo turned out to be surprisingly easy. I just changed the default search engine in Chrome and... and that’s it! DuckDuckGo offers most of the features that I was used to in Google, has similar interface, and most importantly it serves the search results that are just as relevant as those of Google.
Same goes for the image search: DuckDuckGo handles it perfectly, and even has way less annoying user interface than Google.
And when I’m looking for a nice photo to use in a project, I go to Unsplash. Everything there is high quality and totally free to use however you’d like 😍
Let’s not kid ourselves, all the modern browsers are basically the same. They might have this little feature less or this feature more, but I honestly can’t think of any strong reasons to like one over another. Some even share the same engines, just with a different UI. Even Edge is a good browser already. It doesn’t matter that much, which one you choose, and switching between them shouldn’t be a big issue. They can import all your settings from your previous browser.
Or not. That’s what I did: I started with a clean browser – no history, no passwords, no saved forms. I wanted to do it anyway, so ungoogling my life was a good occasion to also restart my browser.
Update: I found out about Brave Browser – it focuses on privacy, has a built-in private window with TOR, and most importantly it blocks most ads and trackers while still staying fair towards content creators by encouraging a new model of digital advertising based on BAT. Check it out, it's awesome!
I used to use the same password for everything. Then I got smarter and started using different versions of the same password. But it’s obviously not how you should treat your passwords to stay safe.
So now I’m using a unique, random, strong password for each service. And I don’t store them in Chrome (or any other browser) anymore. Instead, I put them in a password manager, KeePass. Opening it, finding the right password and copy-pasting it into the browser might be a bit less convenient than having the browser just remember it for you, but this way the only party that ever has access to my unhashed/unencrypted passwords is me.
I sync the KeePass file (encrypted) between devices via Google Drive, so that’s still in the queue to ungoogle. (Update: now synced via Cubbit.)
Getting rid of Google Analytics required a bit more work, because I have 13 websites tracked that would all require new tracking codes, commits, deployments... But when my lovely husband finally published his literary blog, and I had to do all that anyway, we decided to give Matomo a go.
I just had to set it up on my server (it’s totally free, if you self-host it). Aaand I loved it. It’s hard for me to compare their features, since I only use the most basic stuff, but Matomo seems to have all I need (and more), with an interface that I like more than Google’s.
Update: Matomo is very heavy, slow and offers way more features than I ever needed. I found this gem, though, and it's amazing: Plausible.
I really like Inbox’s extra features, like grouping emails into trips, snoozing etc. I might have some doubts about leaving it, if it weren’t for the fact that Google is killing the project, so I won’t be able to use it one way or another. I’m using an email address in my own domain, while my @gmail.com address is mostly there collecting spam, so having to change my address won’t be a problem either.
I don’t want to set up my own IMAP/SMTP server, because I just don’t know enough about it to risk being classified as spam or not having 99.999% uptime.
What I do now is redirect all the incoming emails to Gmail on the DNS level, and use Gmail as SMTP. I could do something similar with almost any other mailbox provider, right?
I guess I’ll try ProtonMail because of their efforts for security and privacy. It’s paid (if you need the features I need), but it seems to be worth it. If one day I have the time and strength to finally start setting it all up, I’ll let you know how it went.
Update: Turns out ProtonMail doesn’t have an option to keep you logged in, it just cleans your session after you close the tab, even on a trusted device. Seriously. That’s just laughable! Users keep requesting it, and ProtonMail keeps ignoring them. Since I’m using a password manager and two-factor authentication, that’s a total deal breaker for me.
I’ve switched to Tutanota. So far it looks just as nice, and it’s even 4x cheaper. The transition was smooth and way easier than I expected (setting up an MX and a TXT record on the DNS). So far, I’m pretty happy with it.
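For anyone curious, that DNS change really is tiny. Here's a sketch of what such a zone fragment can look like (all host names and the verification value below are placeholders, not Tutanota's actual values – your provider's settings page gives you the real ones):

```
; MX record: deliver mail for example.com to the provider's mail server
example.com.    IN  MX   10  mail.example-provider.net.
; TXT record: proves to the provider that you own the domain
example.com.    IN  TXT  "provider-verification=abc123"
```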
I never had a notes app that I was fully happy with. Currently I’m using Google Keep, but it fucks up the synchronization pretty often, leaving me with outdated or missing notes.
Except... I was happy with one notes app, but that one I wrote myself. It was hiding right behind the left border of the screen and would slide out if your mouse went into that area, so it was always just a mouse move away. That was back when I didn’t have to sync it between devices, though.
But actually... Why not? Why not write my own thing?
So... the migration is still in progress.
Update: Developing that project is taking me quite a while (mostly because I'm focusing on other things and not doing this one at all), so I've settled for a corporate solution. And it looks like Apple's Notes work very well and synchronise without problems.
I’m not using Google Drive that much, but still... I think that when I figure out the synchronisation for my notes app, I could use it to sync files as well – most probably hosted on Amazon S3. Let’s see how that goes, keep your fingers crossed! 🤞🏼
Update: I've supported Cubbit on Kickstarter. It looks really promising & revolutionary! It's still at its early stages, though. Time will tell if it was worth it.
I’m fine with keeping some Google tools. For instance Authenticator – it doesn’t store any personal data, it just uses a standardised algorithm to generate time-dependent access codes. There’s plenty of compatible apps that can replace it – but I’d have to go to all the websites where I use 2FA and regenerate the tokens. Nah, too much work, not worth it.
Update: After switching from Android to iPhone I had to re-configure the MFA anyway, so I decided to find an alternative for Google Authenticator. And there it was – Authy. It has a way better UI – with icons and colors to more easily select which account you want to log in to – and it allows you to share your access tokens between multiple devices, making it way easier to migrate to a new device, to use your computer when your phone is not around, and to recover when you lose your device.
The ad revenue from my websites is laughable (it didn’t even reach the minimum for payout, and I had to start over to switch currency and country), but I keep them just in case. One time a post of mine got so popular it almost broke my server, but I had no ads in place at the time... #tyleprzegrać
Still, it seems like AdSense is the simplest (auto ads 😍), most advanced and most seamless ad platform I could find. And with my level of “revenue” it doesn’t really matter which one I use. So screw it for now.
Update: I've decided to only keep AdSense on the three websites where they make some profit: oursong.eurovote.eu, generator.avris.it & naked-adventure.eu. And instead I'm trying out BAT-based advertising in the Brave Browser 😍
Update: I found out about arc.io and it's looking great so far! Instead of showing ads, it asks your users to (seamlessly) be nodes in their CDN. The revenue is really good 😍
Plenty of my favourite content is on YouTube and nowhere else, so there’s no way for me to stop using it. But screw it.
Although, if I were uploading some videos myself (without needing a popular platform, just hosting), I’d definitely go for some other platform, probably Vimeo.
Update: It's still a fresh project, but it looks really promising! Basically, it's an independent, open-source YouTube client that only uses Google's servers for what's really necessary. No need for a Google account to keep track of your subscriptions, no tracking, no ads. And it's written in VueJS, so when I had a problem importing my YT subscriptions, I just fixed it myself and submitted a pull request 😉
I also got a subscription for Nebula. It's really cheap and it lets me support a lot of my favourite youtubers directly, not via a huge corporate proxy, plus it's without any ads and with exclusive content too!
Google Maps are good. Apple Maps seem to be better already, but they’re not available in a browser or on Android, so absolutely not for me. Screw it.
Update: After migrating to iPhone I now use Apple Maps there. Also, DuckDuckGo is now using Apple Maps for their search results. Hopefully, a standalone web version of Apple Maps will also be available soon.
Update: After switching to MacBook Pro, I can now use Apple Maps on desktop as well 🎉. Also, I remembered that I have a project that uses Google Maps heavily, Naked Adventure. I took some time recently to rewrite it from scratch. The new version switched completely to Apple Maps. And it looks gorgeous! 🥰
Same. Translate is good, I’m keeping it.
Update: DeepL Translator seems to be doing just as good of a job as Google Translate, except without tracking you 👍 Also, their macOS app is just amazing! Just press Ctrl+C twice to translate any text, and click one button to insert the translation back. So comfortable! I used to use my own project for similarly easy access (Vocabus - Dictionary at your fingertips!), but it was just a dictionary, not a translator. Such an improvement!
That’s a tricky one. There’s plenty that annoys me in Google Photos, and it’s definitely risky to give them access to all your pictures, but on the other hand... they offer unlimited space. Unlimited! Consider me bought, Google...
Edit: TBH, there was another reason I was reluctant to move my photos anywhere outside Google: because they are a complete mess, and migrating would force me to finally clean it up.
I was postponing that for a long while, but last weekend I finally swallowed that pill. I've spent two evenings going through 30 GB of data, assigning each picture to an album, removing trash, separating out all the nudes and porn… Finally, I'm done!
Apple Photos might cost me a bit (2.99€/m), but at least I don't store my most private data on the servers of a company notorious for crapping on privacy. Plus it has a slightly better interface and algorithms, that's nice.
Tricky as well. I’ve used iPhones and MacBooks that my companies provided, and I was really satisfied with them. Just not enough to actually pay that much to get one for myself. Though this year I might actually end up switching to an iPhone, who knows.
Update: Aaaand I did. I needed a new phone anyway, and since my husband had tested iPhone XS on himself and is totally in love with it, I decided to switch as well.
Update: I honestly forgot to mention Calendar here before. I stayed with Google there, but after getting an iPhone I decided to switch to their calendar app as well. I don’t see any advantages or disadvantages of iCloud Calendar over Google Calendar yet, except maybe the Apple one being less messy in its settings. But well, at least it’s yet another area where I got Google-free.
Btw, a tip: if you want to transfer the events from Google to Apple, export them to an .ics file, and then mail it to yourself. When you open the attachment on your iPhone, it will let you import all the events (just use the Mail app, for some reason this doesn’t work on Tutanota).
Update: I forgot about this one as well, since I performed that migration a long time ago already. I don’t know if it’s still relevant today, but if you’re looking for a way to transfer your music from one service to the other, you might want to check out my old post: Exporting playlists from Google Play Music to Spotify
Before, I just briefly mentioned reCAPTCHA in the last part. But now that I've found a perfect replacement for it, it deserves a separate section.
Google offers reCAPTCHA for “free”, but actually uses it to train machine learning models and to track you on non-Google websites that use their tool.
hCaptcha, on the other hand, does not track you, and they even share their revenue from training ML with website owners that use their tool. hCaptcha is just as accurate and user-friendly as reCAPTCHA – can totally recommend!
Anyways... You can check out nomoregoogle.com, it collects alternatives to different Google products. Let’s keep it diverse! 😉
Recently, I’ve realised once again how omnipresent Google is. It can track us even when we don’t use Google. Any website that serves ads from Google, uses Google Analytics, reCAPTCHA, Google Maps, etc. etc. (so almost every website, including, regrettably, some of mine – not anymore 🎉) executes scripts from Google’s domains, which gives Google access to your activity on that website, while they also have access to their own cookies. They see almost your every move online!
Solution: log out from Google and remove all their cookies. The same goes for Facebook and other companies that make money off your privacy – the ones you’ve stopped using but still keep accounts with for whatever reason.
It’s hard. With Google it’s all or nothing. Wanna see your Youtube recommendations? Too bad, we’re also going to automatically log you in to Gmail, GA, Keep, Photos, and... most of the websites on the internet...
But it’s doable. I just did it. I stay logged in only in a separate browser that serves as a sandbox, in case I ever need it.
Oh, and I also disabled any tracking option I could in my Google account settings, removed my activity and double-checked all of the OAuth apps that had access to my Google Account.
I’m feeling so much less watched over already!
I know I said it's not about boycotting Google entirely, just diversifying my tools, but daaaamn it's addictive. Every piece of data taken away from Google feels like a victory 😅
So… I just wanted to announce: as of today I'm (almost) entirely Google-free. I will keep one piece of Google that I can't really get rid of: YouTube – but via a sandboxed app on mobile and a sandbox browser on desktop. Everything else that's left is a question of time: I'm waiting for a new Apple TV release to replace my Chromecast (one of the most annoying pieces of hardware I've ever owned), and for the release of iOS 14, which will come with a translator (or maybe DeepL will create a mobile app by then?).
So pretty much: mission accomplished 🥂🍾
The PHP ecosystem is full of frameworks: Symfony, Laravel, Yii, Zend, Phalcon, and so many, many, many more... All of them built by professionals and supported by big communities. So why on earth would a junior developer, who has just started his first job, try his hand at building yet another one?
Well, here’s why:
When I learned PHP, the URLs I knew how to build and use looked like this: /index.php?module=user&action=show&id=123. I had no idea how to make them nice like this: /user/show/123. Symfony knew, though, and I could just use it.
But I was used to coding things from scratch and understanding how they work internally, I didn’t like the magic of “it just works”. I know, stupid, if I were still doing that, I’d never finish any project. At the time it did make sense, though.
I was curious whether I could replicate the behaviour. I stole the .htaccess config from Symfony, I expanded my knowledge of regular expressions, and eventually, I did it! It was buggy and ugly, but it worked and it was mine, and I was so proud of it!
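For the curious: the heart of such a router really is small. Here's a minimal sketch of the idea (hypothetical code, not the actual Micrus implementation, to which I no longer have access) – the .htaccess rewrite sends every request to a single index.php, and a list of regular expressions maps the URI to an action with parameters:

```php
<?php

// Hypothetical minimal router – an illustration of the idea, not Micrus itself.
// Assumes an .htaccess rewrite pointing every request at this front controller.

function matchRoute(string $uri): ?array
{
    // pattern => action name (names are made up for this example)
    $routes = [
        '#^/user/show/(\d+)$#' => 'userShow',
        '#^/user/list$#'       => 'userList',
    ];

    foreach ($routes as $pattern => $action) {
        if (preg_match($pattern, $uri, $matches)) {
            // $matches[0] is the whole match; the rest are the captured params
            return ['action' => $action, 'params' => array_slice($matches, 1)];
        }
    }

    return null; // no route matched → 404
}
```

So matchRoute('/user/show/123') yields the userShow action with ['123'] as its parameters – a big part of a framework is, in a sense, layers upon layers around this little trick.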
And then I couldn’t stop. Every time I had to dig into Symfony’s code and got overwhelmed by its complexity, I realised how simple the overall logic actually is, even though it’s bloated by trying to keep it generic and open for modification/extension. So just like I did with the router, I also rewrote from scratch the event dispatcher, the security layer, the DI container and a couple of other components, eventually ending up with a fully usable framework.
I don’t think I have access to its source code anymore, but I think it was only 10 files or so. It was crappy, didn’t comply with PSR-4, wasn’t published as a Composer package, and probably was full of bugs that I’ve never found.
It didn’t really make much sense to further develop it. Nobody’s gonna use it anyway. It’s not gonna be better than any of the frameworks that have plenty of people working on them. I’ve already learned a lot, isn’t it enough to call it a day?
But I took some criticism for version 0.0 from people who knew better how to code, so I had to address their advice and make Micrus better. And it turned out to be a great decision, because I still had a lot more to learn.
The biggest challenge was managing dependencies. I couldn’t put all the modules like “Mailer”, “Twig”, “Social login”, “CRUD” etc. in one huge package and still call it “Micrus”. But I was developing all of them at once! I had to learn more about the internals of Composer in order to treat local directories as if they were packages, without pushing or tagging them (only later did I find out about the repository type “path” – really useful!). I had to learn how to organise the code better so that adding or removing entire modules could be as seamless as possible. I had to keep them consistent, simple, powerful, extendable. I spent hours and hours, and days and days, on figuring out their architecture...
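For reference, this is roughly what a “path” repository looks like in composer.json (the package name and directory here are made up for illustration). Composer will symlink the local directory instead of installing a tagged release, so changes are visible immediately:

```json
{
    "repositories": [
        { "type": "path", "url": "../micrus-mailer" }
    ],
    "require": {
        "acme/micrus-mailer": "@dev"
    }
}
```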
The second biggest challenge was the forms. They’re just awful. The form abstraction layer has to move data around between the actual object that holds data, a form and its HTML representation. It needs to map the data both ways, it needs to validate it, it’s just a mess. At work at Rocket Internet, even though we were using a Phalcon-based framework, many ventures decided to use the standalone Symfony Form component instead of Phalcon forms, because that was the least troublesome one.
Still, they weren’t simple enough for me. Sure, they’re way, way better than handling forms without any framework, but whenever I needed to do anything even slightly non-standard, I really missed the option of just writing down some HTML. Instead, I had to learn about the internals of the framework. I didn’t like it, so I’ve spent ages trying to prove to myself that one can have the best of both worlds.
The same with the security layer. I really hate how Symfony implements it. It’s a piece of cake to configure the simplest cases – they hide everything behind a veil of magic. But as soon as you need something non-standard, you need to go through their documentation, get into their way of thinking, understand their ideas... You can’t just write down the simple logic you need, you need wrappers, voters, providers, listeners, etc. etc. I decided to try to implement it my way.
And it worked for me. I’ve released Micrus • Tiny, yet powerful, PHP framework, which later became Micrus v4 • Beauty of simplicity. I’ve built a couple of projects based on Micrus (listed at micrus.avris.it). I had fun seeing how easy it was for me to build apps with Micrus.
Whenever I encountered something worth improving, I did so. How exhausting was that! On one hand disappointing (I wasn’t expecting anyone else to use it), on the other really rewarding (it is an accomplishment after all). When I discovered The non-magic of autowiring, I implemented that as well. I covered 100% of my code with unit tests (which was quite a challenging type of code to cover).
But then Symfony released its version four. And it’s amazing. It has Flex, it’s flexible, it’s fast... Maybe it’s not as simple as I would wish it to be, but it’s undeniably way better than anything I could ever create myself. Also, I simply got tired of building Micrus already. It grew bigger than I thought it would, it started resembling Symfony more and more, while I understood more and more of Symfony’s design...
So there I am: having learned an awful lot about standards, modularisation, architecture, framework internals, autoloading, autowiring, testing, managing dependencies, and much much more. Having created something irrelevant, but quite impressive nevertheless.
I’m still gonna keep putting Micrus in my CV, I’m still gonna keep most of my Micrus-based projects running on it, but I’m not gonna improve it anymore or start any new projects on it.
It’s been a wonderful adventure. Now it’s time to move on 😊
I’ve lived in three countries so far, and I got some official documents from all of them (the Germans definitely spam way more than the others). I think it’s interesting to compare how different their approaches are to the design of those documents.
Let’s have a gradient background, microprint, watermarks, shitloads of eagles and other things!
We don’t give a fuck, just print it out. Yes, monospace fonts are fine.
We’ll just put a small deep-blue ribbon on a clean design. They’ll know it’s a fucking ROYAL letter!
They’re killing it with that modernised coat-of-arms on that blue background notch 😍 They even put it on government buildings and minister’s twitter accounts.
There is a website I created many years ago, Stosłowia (Polish only), which collects stories of up to a hundred words. It never got any users, but I didn’t really care to promote it in any way either.
Last week I decided to rewrite it from scratch, because so many things were wrong with it – from an ancient backend in plain PHP with hardcoded credentials and no separation of concerns, to login with Facebook (and only Facebook) that stopped working... Now it’s a fresh Symfony 4.1 with Encore, with some new features (like automatic screenshot generation, seen for instance on Twitter).
But what I’d like to show you is how a couple of pretty small design changes have made the whole website way nicer visually (IMHO).
Let’s start with the logo. The old one was the laziest option possible: just an icon of a pencil from Font Awesome v4.
The new one is a modification of free vector icons created by Denis Sazhin from the Noun Project (CC-BY), that looks way more individual. Its elements form a “100”, which relates to the name and the purpose of the website.
Maybe I shouldn’t judge the design, if it’s me who created the website, but since it’s mostly a Bootswatch theme anyway, I’ll dare to say: it’s not bad. I like it. It’s clean and minimalist.
What I don’t like is my own choices: like the jumbotron with random background images, which made the text not-so-readable, even with the addition of a text shadow, or the lack of a footer, which makes the website look somehow... incomplete...
So what I did to polish it up:
Check out the end effect below. What do you think? 😉
It’s honestly difficult being a web developer in the world of shitty websites. I guess that’s how hairdressers feel when they see my pathetic hair after it’s been a while since my last visit...
But the thing is, even though it’s technically easy to use scissors and clippers, I don’t do that on my own hair, I leave that to the professionals.
It’s great that HTML is so easy to learn and that many schools teach it to children. But its simplicity is also a curse, leaving some people convinced that “they can now do programming” (HTML is a markup language, not a programming language) and that the whole thing is easy.
And, admittedly, writing code that does stuff and solves problems isn’t that hard either, especially if you have Google and StackOverflow on your side. The hard part is to write this code in such a way that another person can understand it, maintain it and modify it, so that groups of programmers can collaborate on it together, so that it can easily be kept up to date... I learned how to write code when I was twelve. But learning how to write good, maintainable code took me years of professional work, and I still have a lot to learn.
It’s also difficult for professionals to keep up to date with technology, it changes so rapidly. For example, a teacher of web development at a university (!) was teaching my class how to use HTML4 and how to deal with the ISO-8859-1 encoding, even though both HTML5 and UTF-8 have been standards for years!
The same goes, I guess, for the author of wnbr.nl:
It’s not a bad website. It gives you the information you need, it doesn’t hurt your eyes, it even uses HTML5 and UTF-8 (so much better than my uni...). On the other hand there’s plenty one could do to make it better:
And while it’s completely fine if a personal website or a side project looks like that, I feel really sad when I see a website of a doctor, a business, or a big event that is stuck so far in the past... Plenty of people are working on constant improvements of the standards, on new technologies, new approaches, new APIs – and yet, as a user, I’m still not able to make a doctor’s appointment online, as if it were so damn hard to implement... Spending your whole life on digitalising the world bit by bit, but not being able to fully enjoy that digitalisation as a user, really sucks...
Anyways... As a person, who wants to make the world (also the cyber-world) a better place, who really enjoyed the WNBR this year and who happened to have some free time on my hands, I contacted the organisers and offered them my help in bringing their website to the 21st century.
I assumed they wouldn’t want it, actually. Somebody created the current website (nsesoftware.nl), somebody is proud of it, and obviously nobody likes criticism... So I tried to be nice, not to criticise, but to offer some ideas and help.
I got no answer though, and I really needed to start right away, as long as I had free time and the motivation to work on it. I ended up with something like this:
I got nothing but positive feedback from all the friends whom I’ve shown the end result. Still, I got no response from the WNBR people on whether or not they actually want it. But since it’s almost done already, I’ve sent them a demo, saying that if they do, it’s free for them to use.
After a while I kind of got an answer – not a “no, thanks”, or even a “fuck off”... instead, I just got blocked on Twitter. So childish... Well, that’s how you lose like 1000€ worth of free service, too bad for them.
Anyways... a couple of days ago I stumbled upon the website polyamorynetwork.com. I totally fell in love with their logo 😍 And even though I’m not a polyamorist myself (yet?), I decided to check it out anyway.
I couldn’t, though. Setting aside that the SSL certificate had been expired for a month, which makes browsers give the user a scary red security warning... The registration form still uses reCAPTCHA v1, which got shut down in March 2018 (and deprecated way earlier). That means nobody has been able to join their network for the last half a year, and apparently nobody noticed!
Is the website abandoned? Why can’t it just say so? Or maybe there’s a big, active community there, but it’s closed for new members, because the admin doesn’t give a damn anymore? I guess I’ll never know.
On one hand it’s great that anyone can build their own website – that’s what makes the web thrive, that’s what makes it open and equal!
On the other hand though – it requires way more skill and knowledge to do it right. If you want to look professional, keep that in mind.
So what’s my point? I guess I don’t have any, just wanted to complain a bit.
Except maybe for one thing:
If you have (or want to start) a project, event or a non-profit organisation that I could stand behind (LGBTQ rights, human rights, naturism, non-monogamy, education...) that I could support from the IT side, feel free to contact me 😉
I had to learn Git as a programmer. If you want to easily collaborate on a codebase, you really need either Git or something similar. But as a non-programmer, you’ve probably never even heard that name, have you? Then why would you ever need it?
Well, for exactly the same reasons!
It’s a version control system, which means it remembers how the content of a directory (called a “repository”) changed over time. You can decide which moments in time are saved – those snapshots of the repository are called “commits”. You can send those commits to a server (it’s called “pushing”), so that other people can download them (“pull”) and collaborate on your project (plus the server works as a backup of your data). If you ever want to revert your repository to how it looked some time ago, or to see how it changed over time, or who wrote a particular line in a file, etc. etc., Git offers you all of that.
Did you notice how I didn’t use the word “code” anywhere in the above definition? That’s because Git isn’t about code or programming. Yes, it was created to help with the development of the Linux kernel, but it really doesn’t care what kind of files it handles.
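To make that vocabulary a bit more concrete, here’s what taking a first snapshot of a completely non-code project looks like in a terminal (the file name is invented; the commands run in a throwaway directory, so they’re safe to try):

```shell
# create a throwaway directory and turn it into a repository
cd "$(mktemp -d)"
git init --quiet
git config user.name "Example"
git config user.email "example@example.com"

# add a file and take a snapshot of it (a "commit")
echo "Chapter 1: It was a dark and stormy night." > novel.txt
git add novel.txt
git commit --quiet --message "First draft of chapter 1"

# see the history of snapshots so far
git log --oneline
```

Pushing and pulling only come into play once you connect the repository to a server – everything above works entirely offline.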
Are you a graphic designer, tired of keeping all those files like project.psd, final.psd, final2.psd, final2-FINAL.psd, because the client might change their opinion one more time and basically ask you to redo your first project once again? Then Git will help you!
Are you a writer, wanting to make a major change in your book, but afraid it will not work out fine, so you’d like to be able to quickly revert that change? Yup, Git is for you!
Are you a person who wants to track how their CV changed over time, without keeping all the versions as separate files? Just use Git!
Do you want to collaborate on something with your friends, without sending email attachments back and forth or worrying if everyone has the newest version? Git might be exactly what you need!
Well, not here, this is not a tutorial. The Internet already has plenty of those. Pick some and spend some time with it – you won’t regret it!