GitHub as an identity provider

As I write these words, I have just finally disabled our Samba-based LDAP/Active Directory server, which was incredibly hard to update, to back up and to re-create in case of an emergency.

We’ve traditionally used Gmail for Email and Calendar access and LDAP/Active Directory for everything else: The office WLAN, VPN access, internal support tool access, SSH access, you name it.

However, that solution came with operational drawbacks for us:

  • Unix support for Active Directory is flaky at best, and the UI Windows once provided for setting up the LDAP attributes required for Unix users stopped working with Windows 8.
  • Samba is hard to install and keep up to date. Setting up an Active Directory domain creates a ton of local state that’s difficult to back up and impossible to put into a configuration management system.
  • The LDAP server being reachable is a prerequisite for authentication on machines to work. This meant that tunnels needed to be created so that various machines could reach it.
  • Each Debian update came with new best practice for LDAP authentication of users, so each time new tools and configuration needed to be learned, tested and applied.
  • As we’re working with contractors, giving them temporary access to our support tools is difficult because we need to create temporary LDAP accounts for them.
  • Any benefits that could have been had by having workstations use a centralized user account database have evaporated over time as our most-used client OS (macOS) lost more and more central directory support.

On the other hand, everybody at our company has a GitHub account and so do contractors, so we’re already controlling user access via GitHub.

GitHub provides excellent support for account security too, especially once we could force 2FA to be enabled for all users. Also, over time, our reliance on locally installed tools grew smaller and smaller while, on the other hand, most cloud services started to provide OAuth Sign-in-with-GitHub functionality.

It became clear that stuff would work ever so much better if only we could get rid of that LDAP/Active Directory thing.

So I started a multi-year endeavor to get rid of LDAP. It wouldn’t have needed to be multi-year, but as this happened mostly as side-projects, I tackled them whenever I had the opportunity.

I’m fully aware that this puts us into the position where we’re dependent on our GitHub subscription being active. But so is a lot of our daily development work. We use Issues, Pull Requests, GitHub Actions, you name it. While git itself is, of course, decentralized, all the other services provided by GitHub are not.

Then again, while we would be in a really bad spot with regards to our development processes, unhooking our glue code from GitHub and changing it to a traditional username/password solution would be very feasible even in a relatively short time-frame (much shorter than the disruption to the rest of our processes).

Which means that I’m ready to increase the dependency on GitHub even more and use them as our identity provider.

The first thing to change was the internal support tool our support team uses to get authenticated access to our sites for support purposes. The interface between that tool and the sites has always used signed access tickets (think JWT, but slightly different, mostly because it pre-dates JWT by about five years) to give users access. The target site itself did not need access to LDAP; only the support tool needed it to authenticate our support team members.

So unhooking that tool from LDAP and hooking it up to GitHub was the first step. GitHub has well-documented support for writing OAuth client apps to authenticate users.
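
To give an idea of what’s involved, GitHub’s web application flow boils down to three steps (the endpoints are GitHub’s documented ones; CLIENT_ID, CLIENT_SECRET and CODE are placeholders for your app’s values):

    # 1. Send the user to GitHub's authorization page:
    #    https://github.com/login/oauth/authorize?client_id=CLIENT_ID&scope=read:org
    # 2. GitHub redirects back with ?code=...; exchange the code for a token:
    curl -s -X POST https://github.com/login/oauth/access_token \
         -H 'Accept: application/json' \
         -d client_id=CLIENT_ID -d client_secret=CLIENT_SECRET -d code=CODE
    # 3. Use the token to identify the user and check their team memberships:
    curl -s -H 'Authorization: token ACCESS_TOKEN' https://api.github.com/user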

Next was authenticating users to give SSH access to production machines. This was solved by teaching our support tool how to sign SSH public keys and by telling the production machines to trust that CA.
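
On the server side, trusting the CA is just a line or two in sshd_config (a sketch; paths and the optional principals mapping depend on your setup):

    # /etc/ssh/sshd_config: accept user certificates signed by our CA
    TrustedUserCAKeys /etc/ssh/user_ca.pub
    # optional: map certificate principals to local accounts, per user
    AuthorizedPrincipalsFile /etc/ssh/principals/%u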

I wrote a small utility in Swift to send an SSH public key to the support tool to have it signed and to install the certificate in the SSH agent. The certificates have a short lifetime ranging from one day to at most one week (depending on user) and using the GitHub API, the central tool knows about team memberships which allows us to confer different permissions on different servers based on team membership.
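
The signing step itself is a single ssh-keygen invocation on the support tool’s side; a sketch with made-up names, where identity, principals and validity would come from the GitHub team lookup:

    # sign alice's public key: identity "alice", principals derived from
    # her teams, valid for one day
    ssh-keygen -s user_ca -I alice -n alice,support -V +1d id_ed25519.pub
    # on the client, loading the private key also picks up the matching
    # id_ed25519-cert.pub for the agent
    ssh-add ~/.ssh/id_ed25519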

Now, all of the SSH servers can do all of the certificate validation entirely locally (due to the short lifetime, we can live without a CRL), independent of network access (which some machines don’t have at all).

Which means that SSH access is now possible independently of LDAP and even network availability. And it uses a mechanism that’s very simple and comes with zero dependencies aside from OpenSSH itself.

Then came the VPN. I had been running IPSec with IKEv2 to provide authenticated access to parts of the production network. It (or rather the RADIUS server it used) needed access to LDAP, and even though it was using stock PFSense IPSec support, it was unreliable and needed restarts with some regularity.

This was entirely replaced by an SSH bastion host and ProxyJump in conjunction with the SSH certificates described above. No more LDAP: production access is now based on GitHub team membership. While it never happened and I would be very wary of allowing it, this would even let us give contractors selective access to machines based on nothing but their GitHub account (and who doesn’t have one of those these days).
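
Client-side, this amounts to a few lines of ~/.ssh/config (hostnames are made up):

    Host bastion
        HostName bastion.example.com
        CertificateFile ~/.ssh/id_ed25519-cert.pub

    Host *.prod.example.com
        ProxyJump bastion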

Behind the production network, there’s another, darker part of the infrastructure. That’s the network where all the remote management interfaces and the virtual machine hosts are connected to. This one is absolutely critical and access to it is naturally very restricted.

The bastion host described above does not have access to that network.

In comes the next hat our support tool/GitHub integration is wearing: It can synchronize Tailscale ACLs with GitHub and it can dynamically alter the ACL to give temporary access to specific users.

Tailscale itself uses GitHub as an identity provider too (and also supports custom identity providers, so, again, losing GitHub for this would not be the end of the world), and our support tool uses the GitHub and Tailscale APIs to make sure that only users in a specific GitHub group get access to Tailscale at all.

So everybody who needs network access that’s not doable or not convenient via the SSH bastion host has a Tailscale account (very few people), and of those, even fewer are in the GitHub Team that causes our support tool to let them request temporary (30 minutes max) access to the super secret backstage network.
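
The ACL manipulation goes through Tailscale’s public API. Fetching and updating the tailnet policy file looks roughly like this (the API key and tailnet name are placeholders):

    # fetch the current tailnet policy file (API key as basic-auth user)
    curl -s -u 'tskey-api-XXXXX:' \
         https://api.tailscale.com/api/v2/tailnet/example.com/acl
    # POST the modified document back to the same URL to apply the
    # temporary rule; remove it again once the 30 minutes are up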

Which completely removes the last vestiges of the VPN from the picture and leaves us with just one single dependency: The office wifi.

Even though the office network really isn’t in a privileged position (any more), I want access to that network to be authenticated and I want to be able to revoke access to individual users.

Which is why we have always used Enterprise WPA over RADIUS against Active Directory/SAMBA to authenticate WiFi access to the office network.

This has now been replaced by, you guessed it, our support tool, which creates and stores a unique, completely random password for each user in a specific GitHub Team and offers an API endpoint to be used by the freeRADIUS rlm_rest module to authenticate those users. In order to still have WiFi even when our office internet access is unavailable (though I can’t really see why we’d need that given our reliance on cloud-based services), I added a local proxy in front of that API endpoint which serves a stale response in case of errors (for some hours – long enough for us to fix the internet outage, but short enough not to give no-longer-authorized users access to the network).
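
The freeRADIUS side of this is little more than pointing rlm_rest at that endpoint. A sketch of the relevant part of mods-available/rest – the endpoint path and payload shape are made up; only the module options are freeRADIUS’s own:

    rest {
        connect_uri = "https://support-tool.example.com"
        authenticate {
            uri = "${..connect_uri}/radius/auth"
            method = 'post'
            body = 'json'
            data = '{ "user": "%{User-Name}", "password": "%{User-Password}" }'
        }
    }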

With this last step, our final dependency on LDAP was dropped and all of our identity management is now outsourced to GitHub, so I could finally issue that one last command:

shutdown -h now

Tailscale on PFSense

For a bit more than a year now, I’ve been a user of Tailscale, a service that builds an overlay network on top of WireGuard while relying on OAuth with third-party services for authentication.

It’s incredibly easy to get going with Tailscale and the free tier they provide is more than good enough for the common personal use cases (in my case: tech support for my family).

Most of the things that are incredibly hard to set up with traditional VPN services just work out of the box or require a minimal amount of configuration. Heck, even more complicated things like tunnel splitting and DNS resolution in different private subnets just work. It’s magic.

While I have some gripes that prevent me from switching all our company VPN connectivity over to them, those are a topic for a future blog post.

The reason I’m writing here right now is that a few weeks ago, Netgate and Tailscale announced a Tailscale package for PFSense. As a user of both PFSense and Tailscale, I could now get rid of a VM that did nothing but be a Tailscale exit node and subnet router and instead use the Tailscale package to do this on PFSense itself.

However, doing this for a week or so has revealed some very important things to keep in mind which I’m posting about here because other people (and that includes my future self) will run into these issues and some are quite devastating:

When using the Tailscale package on PFSense, you will encounter two issues directly caused by Tailscale, both of which also show up in reports with entirely different causes when you search the internet, so you might be led astray when debugging them.

Connection loss

The first one is the bad one: After some hours of usage, an interface on your PFSense box will become unreachable, dropping all traffic through it. A reboot will fix it and when you then look at the system log, you will find many lines like

arpresolve: can't allocate llinfo for <IP-Address> on <interface>
I’m in so much pain right now

This will happen if one of your configured gateways in “System > Routing” is reachable both over a local connection and through a Tailscale subnet router (even if your PFSense host itself is the one told to advertise that route).

I might have overdone the fixing, but here are all the steps I have taken:

  • Tell Tailscale on PFSense to never use any advertised routes (“VPN > Tailscale > Settings”, uncheck “Accept subnet routes that other nodes advertise”).
  • Disable gateway monitoring under “System > Routing > Gateways” by clicking the pencil next to the gateway in question.

I think what happens is that PFSense will accidentally believe that the subnet advertised via Tailscale is not local and will then refuse to add the address of that gateway to its local ARP table.

IMHO, this is a bug in Tailscale. It should never mess with interfaces it’s exposing as a subnet router to the overlay network.

Log Spam

The second issue is not as bad, but as the effect is so far removed from the cause, it’s still worth talking about.

When looking at the system log (which you will do for the above issue), you will see a ton of entries like

sshguard: Exiting on signal
sshguard: Now monitoring attacks.
this can’t be good. Can it?

What happens is that PFSense, a few releases ago, moved from a binary ring-buffer for logging to a more naïve approach: once a minute, it checks whether a log file is too big and, if so, rotates it and restarts the daemons logging to that file.

If a daemon doesn’t have a built-in means for re-opening log files, PFSense will kill and restart the daemon, which happens to be the case for sshguard.

So the question is: why is the log file being rotated every minute? This is caused by the firewall by default blocking the Tailscale overlay traffic (UDP port 41641) on the WAN interface, and also by default logging every dropped packet.

In order to fix this and assuming you trust Tailscale and their security update policies (which you probably should given that you just installed their package on a gateway), you need to create a rule to allow UDP port 41641 on the WAN interface.
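
In pf terms, such a rule amounts to something like this (illustrative only – on PFSense you’d add it through “Firewall > Rules > WAN”):

    pass in quick on $WAN proto udp from any to ($WAN) port 41641 keep state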

much better now

This, too, IMHO is a bug in the Tailscale package: If your package opens port 41641 on an interface of a machine whose main purpose is being a firewall, you should probably also make sure that traffic to that port is not blocked.

With these two configuration changes in place, the network is stable and the log spam has gone away.

What’s particularly annoying about these two issues is that Googling for either of the two error messages will yield pages and pages of results, none of which apply, because those messages have many other possible causes and because Tailscale is a very recent addition to PFSense.

This is why I decided to post this article in order to provide one more result in Google and this time combining the two keywords: Tailscale and PFSense, in the hope of helping fellow admins who run into the same issues after installing Tailscale on their routers.

After seven years, the Apple Watch experience still is a mess

Seven years ago, in 2015, the Apple Watch was released and quickly switched focus from a personal communication device with some fitness support to a personal fitness device with ancillary functionality.

Every year since then, Apple has released a new version of its watchOS operating system, adding some new features, but most of the time, what was added felt like how software and hardware development was done up until the early 2000s, when features were made to fill bullet lists, not to actually be used.

To this day, the Apple Watch is a device that nearly gets there but even the basic functionality is hampered by bugs, inconsistencies and features which exist on paper but just plain don’t work in reality.

I am a heavy user of the Apple Watch, and daily I stumble over some 100% reproducible issue that I don’t expect to stumble over in an Apple product, much less one with such a pinpoint focus on a specific use case.

My “user story” matches exactly what the Watch was designed for: I’m wearing the watch all day to know the time, to get silent notifications and to use the silent alarm clock. And once a day, I’m going on a running workout while listening to podcasts without taking my phone with me.

I’m a nerd, so I tend to get hung up in a case of XKCD 1172, but none of this user story strays from what the watch was designed for.

But now, let me guide you through my day of 100% reproducible annoyances which have been present since the respective feature was added to the Watch (multiple years ago) and which to this day have not been addressed and which start to drive me up the wall to the point that I’m now sitting down and writing a long-form article.

First, let me give you context about the Apps and Hardware involved in my setup:

  • I’m using Overcast as my podcast app. It has a hard time syncing podcasts due to 3rd party API restrictions, but it’s still better than the built-in Podcasts app, because that one cannot even sync the playback order of playlists (native podcast support was added to the watch in 2018) and most of the time, syncing episodes did not work, worked only partially (some episodes downloaded, some not), or appeared to work (progress bar shown in the watch app on the phone but not moving). Streaming over LTE works (at an understandably huge cost in battery life), but even then, I have to manually select the next episode to play because it does not sync the playback order of playlists (called “Stations” in Apple Podcasts terms).
  • I’m using AirPods Pro as my headphones.
  • I’m tracking my runs using the built-in Workouts app (because that one is more equal than others with regards to what features it can have compared to third-party apps), not because it’s better in general.

That’s all.

The trouble starts before I start my workout: Sometimes I want to add a specific podcast to the playlist I’m listening to. Because Overcast (see above for the reasoning why Overcast) only allows syncing one playlist, this means it will have to download that episode.

So I open Overcast and watch it start downloading.

Which is very slow, because watchOS decides to use my phone as a proxy over a Bluetooth connection (this has been the case since 2015).

I have WiFi enabled, but the Watch doesn’t auto-join my UniFi-based WiFi (it works at home, but not at the COVID-related “home office” location). All other Apple devices join fine. WiFi has been a Watch feature since 2015.

But even if I manually join the WiFi (which works fine), watchOS will not stop using Bluetooth, so that won’t improve download speeds (this, too, has been the case since 2015).

Also, because I switched to the settings app, Overcast was force-quit by the OS due to resource constraints, so when I go back to it, the download will be marked as “Failed” and I have to start it again.

So my daily routine before a run when I want to listen to a specific episode of a podcast that has not yet been downloaded for whatever reason, is as follows:

  • go into settings
  • disable bluetooth
  • manually join wifi
  • open Overcast and start the download.
  • Continuously tap the screen in order to make sure the app is not terminated while the download is ongoing

You could chalk this up as Overcast’s fault, but other podcast players on the platform, most notably Apple’s native one, have similar to identical problems (with Apple’s offering being spectacularly bad in that it doesn’t work right even aside from the connectivity problems).

OK. Now I’m ready to run. So I select my episode in Overcast and hit play. But all I get is a prompt to choose an output device. Of course, I forgot to re-enable Bluetooth, which I fix, and after a very long wait, finally, the AirPod in my right ear connects to the watch and starts playing the podcast. The left one remains silent until I take it out of my ear and put it back in (this doesn’t happen every time, but it does happen regularly enough for me to write it down here).

As I sit down again at my office chair to put on my running shoes, I accidentally bump the table which causes the mouse to move and the computer to wake up again. Thanks to automatic device switching (a feature added in 2020), the mac immediately grabs the AirPods back from my Watch to play the silence it’s currently playing (this one happens every. single. time).

So I go back to the watch and press play. Another 20s of waiting while the Watch negotiates the handover with my mac and I’m back to my podcast.

Finally it’s time to leave.

I’m in the very privileged position to work right next to where I’m running, so as I start to run, I’m initially going in and out of WiFi range. Every time the watch connects to or disconnects from the WiFi, the audio stutters and sometimes breaks off completely (bug present since 2015).

So I stop and disable WiFi.

But now I’m running. Finally.

The workout itself is fine (with the exception of the display issue that if the workout is auto-paused due to me stopping to tie my shoelaces and then resumed, the screen will say “paused” in the top left, but the running animation and timer will still be running – this is a regression in watchOS 8, released in 2021) until the point that I’m getting a notification from a message I want to reply to.

I bought a Series 7 Watch based on their presentation of the new QWERTY keyboard feature for exactly this purpose.

Unfortunately, the message I got is in German (I live in the German-speaking part of Switzerland after all) and I want to reply in German. The new signature feature of the Series 7 Watch is not available in any language but English, though – which nobody told me beforehand. So it’s back to either Scribble, with only sporadic success at typing umlauts, or dictation, where I can watch the device in real time bungle my speech into a ridiculous word salad it wants me to send off. The watch is much worse at dictation than the phone.

There’s no reason for the QWERTY keyboard to be Series 7 exclusive other than to make more money for Apple, which is also why they touted it as a signature feature of this new hardware generation.

They could at least have bothered to make it usable for the majority of the people on this planet (who speak a language other than English).

Anyways – back to the run. It starts to rain and after having had a half-marathon cancelled unbeknownst to me by a wet shirt hitting the touch screen in the past (why not warn me over my headphones if you detect I’m still moving? Ah right. There’s no weather in California, so this problem doesn’t happen), I enable the key lock feature.

After I reach the destination of my run, I want to stop the workout, so I turn the crown to disable the key lock. As that feature was invented for swimming, the loudspeaker of the watch starts playing a sound to eject water that entered the speaker.

All well and good, but also, while playing that sound over the built-in speaker, Bluetooth audio stops. Why? I don’t know, but this misfeature has been present since key lock was introduced in 2016. Sometimes, the audio starts again, sometimes it doesn’t.

But that doesn’t matter anyways, because the moment I’m back in WiFi or Bluetooth range of my phone, clearly what needs to happen is that audio stops and Bluetooth gets transferred back to my phone, which is currently playing… nothing. Also, while transferring audio from the phone to the watch takes multiple tens of seconds, the way back is instant.

This happened now and then even before automatic switching was added in 2020, but since then, it happens every time.

So here you have it. Bug after bug after annoyance every single day. Many of the features I was talking about were added after the initial release of the Watch and were used to coax me into spending money to upgrade to new hardware.

But none of these features work correctly. Some of them just don’t work at all, some of them only work sometimes.

Over the last seven years, the underlying hardware has gotten better and better. The CPU is multiple times faster than it was in 2015. There’s multiple times more memory available. The battery is larger, there’s more storage available. Marketing has graduated the watch from being a companion of the phone to being a mostly self-sufficient internet-connected device.

Why are apps still being killed after mere milliseconds in the background? Why are apps only rarely woken up to do actions in the background? Apps I have installed manually and use all the time. Why are data transfers from the watch to the phone still basically a crapshoot and, if they do work, slow as molasses? Why is Bluetooth audio still hit and miss 6 years after the last iPhone with an audio jack was released? Why did Series 7 launch with a signature feature only available to a small portion of the planet when there’s no regulatory need to do so?

The product is supposed to delight and all it does is frustrate me with reproducible and 100% avoidable issues every single day.

This isn’t about wishing for 3rd party apps to have more capabilities. This isn’t about wishing the hardware to do things it’s not advertised to be doing. This isn’t about the frustrating development experience for 3rd parties. This isn’t about sometimes having to reset the watch completely because a feature stopped working suddenly – that happens too, but rarely enough for me to not mind.

This is about first-party features advertised for nearly a decade working only partially or not working at all when all I’m doing is using the product exactly as the marketing copy is telling me I should be using it.

Apple, please allocate the resources the watchOS platform so desperately needs and finally make it so your excellent hardware product can live up to its promise.

Sensational AG is hiring (again)


Sensational AG is the company I founded together with a colleague back in 2000. Ever since then, we’ve had a very nice combination of fun, interesting work and a very successful business.

We’re a very small team – just eight programmers, one business guy, a product designer and a bloody excellent project manager. Me personally, I would love to keep the team as small and tightly-knit as possible as that brings huge advantages: No internal politics, a lot of freedoms for everybody and mind-blowing productivity.

I’m still amazed to see what we manage to do with our small team time and time again and yet still manage to keep the job fun. It’s not just the stuff we do outside of immediate work, like UT2004 matches, Cola Double Blind Tests, Drone Flights directly from the roof of our office, sometimes hosting JSZurich and meetups for the Zurich Clojure User group and much more – it’s also the work itself that we try to make as fun as possible for everybody.

We are looking for a new member to help us with technical support and smaller scale modifications to our main product, though there’s ample opportunity to grow into helping with bigger projects and getting ownership over pieces of our code-base.

Our main product is an ecommerce platform that’s optimized for wholesale customers. We’re not about presenting a small number of products in the most enticing manner; we’re about helping our end users be as efficient and quick as possible in dealing with their big orders (up to 400 line items per week).

Our customers have relatively large amounts of data for us to handle (the largest data set is 2.3 TB in size). I’m always calling our field “medium data” – while it might still fit into memory, it’s definitely too big to deal with it in the naïve way, so it’s not quite big-data yet, but it’s certainly in interesting spheres.

We’re in the comfortable position that the data entrusted to us is growing at the speed at which we’re able to learn how to deal with it, and so is our architecture. What started as a simple PHP-in-front-of-PostgreSQL deal back in 2004 has by now grown into a cluster of about 40 machines: job queue servers, importer servers, application servers, media servers, event forwarding servers; because we are hosting our infrastructure for our customers, we can afford to go the extra mile and do things that are technically interesting and exciting.

Speaking of infrastructure: We own the full stack of our product: our web application, its connected microservices, our phone apps, our barcode reading apps, but also our backend infrastructure (which is kept up to date by Puppet).

While our main application is a beast of 300k lines of PHP code, we still strive to use the best tool for the job, and in the last years we have grown our infrastructure with tools written in Rust, Clojure and JavaScript (via Node.js); of course, our mobile apps are written in their native languages, Swift and Java with more and more Kotlin.

We try to stay as current as possible even with our core PHP code. We upgraded to PHP 7.4 the day it came out and we’re already running PHP 8.0 beta 3 in our staging and development environments, ready to upgrade the day PHP 8 comes out – those of us who write PHP are already excited about the new features coming in 8.0.

As strong believers in Open Source, whenever we come across a bug in our dependencies, we fix it and publish it upstream. Many of our team members have had their patches merged into PHP, Rust, Tantivy and others. Giving back is only fair (and of course also helps us with future maintenance).

If this sounds interesting to you and you want to help us make it possible for our end users to leave their workplace earlier because ordering is so much easier, then ping me at jobs@sensational.ch.

You should be familiar with working on bigger software projects and have an understanding of software maintainability over the years. We hardly ever start fresh; instead, we constantly strive to keep what we have modern and up to speed with wherever technology goes.

You will be initially mostly working on our PHP and JS (ES2020) code-base, but if you’re into another language and it will help you solve a problem you’re having or your skill in a language we’re already working with can help us solve a problem, then you’re more than welcome to help.

If you have UNIX shell experience, that’s a big plus, though it’s not required – you will just have to learn the ropes a bit.

All our work is tracked in git and we’re extremely into beautiful commit histories and thus heavy users of the full feature-set that git offers. But don’t worry – so far, we’ve helped everybody get up to speed.

And finally: as a mostly male team – after all, we only have one woman on our team of developers – we’d especially love it if more women found their way into our team. All of us are very aware of how difficult it is for minorities to find a comfortable working environment they can add their experiences to and where they can be themselves.

In defense of «macOS 10.15 Vista»

With the release of macOS 10.15 Catalina, people are all up in arms about the additional security popups, comparing it to what happened when Windows Vista introduced UAC and its constant prompting for Administrator permission.

While I can understand where people are coming from, I do have a slightly contrarian opinion which I would like to voice here, as it requires more space than a comment field on some third-party site offers me.

First, after you read the article I linked above, keep in mind that while these prompts after the first boot after the upgrade are certainly very annoying, there’s a difference to Windows Vista and later:

UAC constantly prompts for elevation whenever elevation is needed, but the macOS permission given out with these prompts is persistent: Once you have authorized an application, the authorization remains and the same prompt will not appear again for the same application.

The screenshot presented in the original article happens after the first boot after the upgrade when a lot of applications are launched the first time. None of the prompts seen in the screenshots will ever appear again.

Blanket permission

OK. But the prompts are still annoying. Isn’t there a way the OS could ask ahead of time and the user could blanket-allow all requests?

That would be cool but it could not possibly work without requiring changes to be made to applications: The applications installed on your machine expect to be able to get access to the things the OS now prompts for permission. In most cases, this even involves synchronous API calls, so the application is suspended while the OS is waiting for user input on the permission prompt.

Finally, it’s impossible to know ahead of time what APIs an application is going to use, so it’s impossible to list the things an application needs ahead of time. You could run static analysis on a binary, but it would be full of false positives (scaring the user with accesses an application doesn’t need) and false negatives (still showing dialogs).

For an ahead-of-time permission request, an app would need to declare the permissions it needs and then also be prepared for API calls to fail, even though they used to always succeed (and might not even have an option to signal an error to the caller). This means apps need to be updated.

And you know what: at least for some of the features (namely the filesystem-related things), such a declaration is now possible via the application’s Info.plist file – though, guess what, nobody has updated their applications for Catalina yet.
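
For example, the filesystem-related prompts can be pre-declared with usage-description strings in Info.plist (the keys are Apple’s documented ones; the strings are whatever explanation you want the prompt to show):

    <key>NSDesktopFolderUsageDescription</key>
    <string>Needed to index the documents you keep on your Desktop.</string>
    <key>NSDownloadsFolderUsageDescription</key>
    <string>Needed to import the files you download.</string>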

Off-switch

Fine, so the apps aren’t updated yet. Why isn’t there a way for me to turn this off?

There is a way though: If you boot from the recovery partition (by holding Cmd-R while turning the machine on), you can configure System Integrity Protection to your liking using the command-line tool csrutil (and Gatekeeper using spctl).
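
From a Terminal in the recovery environment, that’s (assuming you really want this):

    csrutil status    # show the current SIP configuration
    csrutil disable   # the big sledgehammer; takes effect after a reboot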

Macs with System Integrity Protection disabled will not do any of this prompting.

Oh – but disabling System Integrity Protection is a security issue? Well – so is letting applications roam free on your disk, control other windows or read keystrokes not sent to themselves.

Oh – but why do I have to reboot to disable this? I want a UI to be able to do so in the running system. If you allow this, then «helpful» applications will silently do that for you, which means Apple needn’t have bothered implementing SIP to begin with.

Ok. But why does it have to be such a complicated command-line tool? In order to protect users from themselves. This is a very powerful sledgehammer. With great power comes great responsibility and by making the steps required as complicated as possible, the likelihood it’s going to give somebody pause before blindly following the steps presented by the «Flash Player Installer» increases.

In conclusion

I think the prompts are annoying, but once you’ve gone through the initial flood, they appear very rarely. For me it was a mild inconvenience, and even though I consider myself a somewhat technical user, I love the protection SIP offers in light of ever more devious dark patterns and phishing attempts (that last link was on HN the same day as the article complaining about Catalina, btw).

Longer-term, I wish that privacy-sensitive APIs will all become asynchronous and will all require declaration ahead of time (like on Android – where people are complaining too) and I wish that applications would update to these APIs or be forced into adopting them (causing another slew of articles about Apple breaking thousands of old applications), but in the meantime, I’m gladly accepting a prompt now and then if it means I’m harder to phish and harder to have my data exfiltrated from.

Fun* with SwiftUI Beta 5 and 6

After a lot of momentum in Beta 4 where I finally got my rogue smide.ch client for the watch to work, Beta 5 and 6 were a bit of a letdown with regards to real-world usability.

Already with Beta 4, Apple deprecated @ObjectBinding and BindableObject, both of which were total staples of SwiftUI and absolutely required for any kind of meaningful application, because they provide the glue by which you hook your UI up to your actual application.

With this, even the last pieces of sample code shown at WWDC sessions about SwiftUI were now invalidated. What a breakneck speed of development.

On the other hand, this was also a case of parallel evolution inside Apple, because everything the old pair provided was also provided by the Combine framework, with the latter having the advantage of actually being usable not just in SwiftUI but anywhere in your applications.

So I can completely understand the reasoning behind the deprecation. If you are willing to clean this up, then the beta period is the time to do it.

However, in Beta 4 on watchOS, while the old method was deprecated, the new way only worked partially: If you changed your class that previously conformed to BindableObject to now conform to the correct Combine.ObservableObject, none of your published changes would actually be picked up by SwiftUI and the UI would remain static.

So in Beta 4, even though it was deprecated, I kept using BindableObject because that’s what was working. @ObjectBinding, on the other hand, I could replace with @ObservedObject.
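
For reference, the migration itself is mechanical. A minimal sketch of the new-style pair (the type names are made up):

    import SwiftUI
    import Combine

    struct Bike: Identifiable { let id: Int; let name: String }

    // was: final class BikeStore: BindableObject { ... }
    final class BikeStore: ObservableObject {
        @Published var bikes: [Bike] = []  // @Published replaces manual publishing
    }

    struct BikeListView: View {
        @ObservedObject var store: BikeStore  // was: @ObjectBinding

        var body: some View {
            List(store.bikes) { bike in Text(bike.name) }
        }
    }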

But then, Beta 5 happened and the Beta 4 app crashed on startup.

Trying to compile it led to a linker error because BindableObject was now gone for good. Note that the compiler was still just complaining about it being deprecated, but at link time, the symbol was missing and linking failed. This would also explain the crash at startup of the old Beta 4 app.

I quickly replaced BindableObject with Combine.ObservableObject, which made the app build again and run fine – on the simulator.

On the real hardware, it would continue crashing on launch.

Even after installing the logging profile on the watch in order to get some information via the Console, all I got was a single log line entry from Carousel complaining about the launched process shutting down.

As this is just a fun project after all, this is where I stopped again, waiting to see what further betas would bring.

After a while, Beta 6 came and went. It brought no change.

Then, Beta 7 happened, but I didn’t even bother trying to recompile without an updated Xcode. That update finally arrived today and, spoiler alert, my app is back in a running state. No further changes were required.

So it all wasn’t my fault after all.

Next time, I’ll talk about the changes I’ve made since Beta 7 and Xcode Beta 6.

Fun with SwiftUI – Beta 4

After hitting yet another wall with Beta 3, the moment Beta 4 was released I updated all my devices, recompiled the application and looked at it on the watch.

This time, without any change on my part, things just fell into place:

Location permission worked correctly, and I had my list of bikes sorted by location.

What I didn’t like as much, though, were all the deprecation warnings I was already accumulating. During this year’s beta period, none of the APIs presented at WWDC were finalized. SwiftUI and Combine are still very much in flux and the APIs change on a biweekly basis.

At the time of this writing, Beta 5 has already removed all older deprecations and has added even more deprecations. We’re now at a point where most of the WWDC session videos explaining SwiftUI and Combine are not applicable to the real world any more, but that’s for another post.

Some lipstick on the pig

With things mostly working right, there was one thing that was bugging me in the list of bikes you’re seeing in the screenshot above: The distances to my current location were manually formatted in meters, but I knew that Apple’s platforms come with very good locale-dependent unit formatters, so I wanted to fix that.

MeasurementFormatter has a .naturalScale unit option that’s supposed to pick scale and unit automatically dependent on the user’s locale. In case of distances, here in Switzerland, I would expect a distance to be shown in meters up until 1 km at which point I would expect a distance to be shown in km with one fractional digit of accuracy.

But that’s not what MeasurementFormatter does: It insisted on using km and it insisted on three fractional digits of accuracy. That’s why I had decided to format on my own. But I knew there had to be a proper solution.

It turns out, there is – but it’s part of MapKit, not of Foundation: There’s MKDistanceFormatter to use for this, and while MeasurementFormatter has a unitOptions property, MKDistanceFormatter has a unitStyle property which you can set to .abbreviated to get the proper effect.
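
Or, in code (a minimal sketch):

    import MapKit

    let formatter = MKDistanceFormatter()
    formatter.unitStyle = .abbreviated  // "950 m" or "1.2 km", per locale

    let label = formatter.string(fromDistance: 1234)  // distance in meters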

So I added that and also used battery icons based on SF Symbols to display the bikes’ battery levels, like so:

we’ll never know why there’s no battery.50 image. Only .100, .25 and .0

Reactive warts in SwiftUI

Remember when I said that the whole UI hierarchy was going to be built from one parent SwiftUI view that would decide on a state object? That’s the design I totally went with:

My main ContentView is just a big switch statement, completely exchanging its subview hierarchy dependent on what the global state handler object thinks should be currently active.
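
Since that code is only visible as a screenshot, here’s a reconstruction of the idea with the subviews stubbed out as Text (note that a bare switch in a body is fine in today’s SwiftUI, while the Beta-4-era original needed workarounds):

    import SwiftUI
    import Combine

    enum ApplicationState { case LoggedOut, ListingBikes, Booking }

    final class ApplicationStateHandler: ObservableObject {
        @Published var state: ApplicationState = .LoggedOut
    }

    struct ContentView: View {
        @ObservedObject var handler: ApplicationStateHandler

        var body: some View {
            switch handler.state {
            case .LoggedOut:    Text("Login")     // stand-in for the login view
            case .ListingBikes: Text("BikeList")  // stand-in for the bike list
            case .Booking:      Text("Booking")   // stand-in for the booking view
            }
        }
    }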

As you can see in the above code, there’s only .ListingBikes – there’s no state for listing a single bike. That’s fine, because it’s up to the BikeList view to decide whether, instead of showing the list of bikes, it wants to show a single bike.

I did this using a NavigationLink, née NavigationButton, setting its destination to the detail view:

What’s nice is that you get a free sliding animation and a free back button out of this. However, what’s not as nice is that if you do this, the detail view gets pushed as another native view on top of the existing view.

Which means that even when the big switch statement from the screenshot above causes a different sub-view to be rendered (and it does get rendered), the additional view pushed by the NavigationLink remains shown on top and does not get closed.

In the end, here on Beta 4, I went with a NavigationDestinationLink so I could first close the detail view and then tell the state handler I wanted to create a booking.

At the time of this writing (Beta 5 has just been released), NavigationDestinationLink is already deprecated again and whether the destination is showing or not can be passed as a $binding; however, also at the time of this writing, this currently messes with the free back button.

Another thing that falls into the same bucket of “re-painting the whole hierarchy does not in fact repaint the whole hierarchy” is a SwiftUI view’s navigationBarTitle modifier: if you set it once on any subview, it will persist even after you remove that subview and replace it with another one which doesn’t use the navigationBarTitle modifier.

Meaning that setting a property on a subview has an effect on global state.

This feels wrong to me.

First booking

Anyways – enough complaining. With all of this in place, I did my very first commute with a smide bike using nothing but my watch. Here you see me at the end of the trip, ready to end the booking:

That felt great.

What doesn’t feel great is the mess Beta 5 made out of my nicely laid-out UI. But that’s for another day.

Fun with SwiftUI – Beta 3

After being abruptly stopped last time by a compilation step that would not complete in a finite amount of time, I let the project rest until Beta 3 was released.

I could probably have found my mistake, but I was also willing to give it another two weeks and then see whether the compiler would just tell me what I was doing wrong.

Which, when Beta 3 was released, it actually did: Trying to recompile the project was immediately stopped by a clear error message (I don’t actually remember the details any more), and the fix was very easy after all.

Motivated to finally move on, I finished hooking up my state handler to the UI itself and I was finally at a point where I would run this skeleton in the simulator.

Simulator says: nope.

First the good news: Apple was right when they proudly said that they have improved the watch simulator workflow: Launching the simulator in stand-alone mode finally is a sub-second endeavor and so is actually launching your app in the debugger.

Working this way is actual honest-to-god fun.

Yes, things should just work like this out of the box, but until now, they never did: Running the watch simulator meant also running the phone simulator and proxying all debugger operations through the phone simulator, including breaking connections and horrible, horrible lag.

But none of this happens in watchOS 6 any more: The simulator can run on its own and it launches instantly. No connection issues.

At least not to talk to Xcode…

My initial excitement about things working so well was abruptly dampened by the fact that all network access I was trying to do in the simulator ended up failing with a generic error and in the log output some backend component would complain about losing connection to the background transfer service.

Of course, I first assumed I was the source of the problem and I spent two hours trying to find out what I was doing wrong.

I shouldn’t have, because my last resort was to check the Apple Beta forums and there, I didn’t even have to bother posting a question: Others had the issue too and the solution is to just use a real device.

Onwards to the real device

Updating a watch to a Beta OS is a tricky proposition: There is no (official) way to ever downgrade and stuff is known to be shaky.

Also, my watch is the one single computer I use that produces data for which I have no backup and re-creating the data is a (literally) painful experience.

I’m talking about workout data.

For two years now, I’ve been running 10ish km every day and while I know rationally that the actual act of running is what counts, unfortunately, my subconscious only accepts a run as having happened when it’s also tracked in the Activity app and when the rings are closed.

So would I dare updating to a beta version knowing that I can’t downgrade and that the watch is producing irreplaceable data that I heavily rely on?

Of course I would. 🤓

But only after checking with our local electronics retailer to make sure that they had a replacement watch in stock if worse came to worst. Yes, I know that you can ask Apple to downgrade a bricked/unusable watch, but that would mean days without the watch and days without my runs being tracked.

Unacceptable.

Anyways. Updating to watchOS 6 went fine and a small test walk around my house has shown that tracking workouts was still generally working fine. So I was all set to try it on the real device.

Moving forward on the device

The good news: While not as fast as the simulator, deploying and debugging on the watch still is considerably quicker and more reliable than it ever was on any previous combination of Xcode, iOS and watchOS.

Debugging still involves proxying through the phone, but now it’s reliable. Over the course of four weeks of doing this (spoiler alert), I only had one or two instances where Xcode wouldn’t talk to the watch any more and I had to restart my computer, my phone and my watch to get connectivity restored.

Judging by other people’s prior experiences, this is a huge step forward.

The other good news: Network requests do indeed work on the real device. My client could fetch a JWT token from the smide.ch service and it could get a list of currently available bikes.

Impressive rendering speed

I have chosen the most naïve implementation possible and just fed the whole list of ~200 bikes directly into the UI framework. No dealing with cell reuse, no limiting the size of the list, nothing. Just “hey SwiftUI, please render this list of 200 bikes”.

And render it does: It’s quick and scrolling through the whole list is buttery smooth without doing any kind of optimization work. And once the next roadblock is fixed (see below) and the list gets dynamically re-sorted as my location changes, that too is buttery smooth.

I’m getting away with telling the framework that the list has changed and needs repainting and I just pass it a new updated list of bikes. The change is instantaneous. Even though it’s a new list of 200 items.

This is so much fun. I shouldn’t need to care about minimizing updates. I shouldn’t need to care about cell reuse. I shouldn’t need to deal with this. And with SwiftUI, I don’t have to.

Location roadblocks

Excited, I moved forward to asking for location access and using location to sort the list of bikes.

And this is where things ground to a halt.

Whenever my independent watch app extension would be launched, I would be calling CLLocationManager.authorizationStatus() which would tell me that my status was .notDetermined, so I would ask for permission.

My delegate callback would be called with .authorizedWhenInUse, but CLLocationManager.authorizationStatus() would still return .notDetermined and all attempts at calling location specific API would be ignored.
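
The code in question was nothing special – roughly this pattern (a sketch):

    import CoreLocation

    final class LocationProvider: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()

        override init() {
            super.init()
            manager.delegate = self
            if CLLocationManager.authorizationStatus() == .notDetermined {
                manager.requestWhenInUseAuthorization()
            }
        }

        func locationManager(_ manager: CLLocationManager,
                             didChangeAuthorization status: CLAuthorizationStatus) {
            // fires with .authorizedWhenInUse, yet authorizationStatus()
            // kept returning .notDetermined in a WKWatchOnly app
            print("authorization changed: \(status.rawValue)")
        }
    }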

As these were my first strides into CoreLocation, I assumed this to be my fault and spent a lot of time debugging, moving code around and trying out things, but no matter what I did, the effects didn’t change.

Then I tried Apple’s Sample Code from 2016 which of course worked fine even after I changed the integrated watch app to be usable independently.

After a few more hours of trial and error, I finally was able to pin it down though: In Beta 3 (and presumably earlier Betas too), the CoreLocation permission management is broken if your watch app is a completely independent watch app.

If it has a companion iOS app, then requesting location permission is fine, but when you have a watch app without any iOS app which has a plist that looks like this:

<key>WKWatchOnly</key>
<true/>

Then requesting location permission would trigger a race condition where your permission is simultaneously granted and not granted.

I could have caved and made an empty iOS companion app at this point, but I decided to report this issue using Feedback Assistant and call it another two weeks.

The relief I felt when I saw Apple’s official code sample fail the same way my code did the moment I set that WKWatchOnly flag was one hell of a feeling.

I wasn’t doing it wrong. I wasn’t losing my mind.

Next time, things will finally fall into place, but only after dealing with deprecations.

Fun with SwiftUI – Beta 2

After spending the first two weeks of the beta period to get a foundation going, I was eager to start working on the actual watch app.

This was right about the time when Beta 2 hit, so first I’ve upgraded to that and then started with the Watch project.

Building the UI

Eager to play around with SwiftUI, the first thing I did was to just create a skeleton UI:

What immediately sprang to my mind as I was working on this was the fact that the built-in preview feature of SwiftUI forces you to keep your views self-contained and to keep the dependencies small and to keep your data easily mockable.

Otherwise you will suddenly be in the position where your Xcode preview requires working network connections and a lot of application state.

I also learned that Beta 2 was still on very shaky ground before even running the actual code once: My attempts to display a map view caused Xcode to crash completely the moment it tried to paint the UI preview, so I stubbed that out to just be a rectangle.

But overall, designing (if you can call it that) the skeleton UI went very quickly (a matter of a few hours) and I was eager to hook everything up.

A hard stop

After working on the UI, the next step was to produce a backend that orchestrates the actual application state. This single class is the only thing that keeps track of state in the application; based on it, the UI decides what to paint and how, and it is what the UI calls into in order to change the overall state (for example, when the user logs in or when they start a booking).

This is what (at the time) you would use @ObjectBinding and BindableObject for.

My next step, thus, was to create what I called the ApplicationStateHandler, which I made implement the BindableObject protocol.

That handler itself would expose a state property which could take one of the various values of an ApplicationState enum. The main SwiftUI view would basically be a huge switch statement over that state property, deciding what actual view to render based on the state.

This was my plan, but no matter what I did, the moment I had ApplicationStateHandler implement the BindableObject protocol, I would put Xcode 11 Beta 2 in a state where it was using 100% of each of my 8 CPU cores while trying to compile my code.

So in the end, I wasn’t stopped by incomprehensible error messages (I got my share of those too), but by a compilation run that did not seem to want to complete in finite time.

Instead of solving the halting problem, I decided to wait another two weeks because I already had other non-project related things on my plate.

Stay tuned for next time to see what stopped me hard in Beta 3.

Fun with SwiftUI – Beta 1

As explained before, I’ve decided to scratch my own itch and write an independent Apple Watch client for the smide.ch bike sharing service.

The first step in getting from the idea to the final watch app didn’t actually involve the Watch at all: Before I could get started, I needed to know how the existing smide clients actually work and how to talk to their server.

Then I wanted to have a unit-tested library that I could use from the Watch Frontend.

On top of that library, I wanted to have a command-line client for easier debugging of the library itself.

And only then would I start working on the frontend on the watch.

Preliminaries

So as Developer Beta 1 of Xcode 11, watchOS 6 and Catalina rolled out, I spent the first few days of development reverse-engineering the official Smide client.

As always, the easiest solution was to just de-compile their Android client and, lo and behold, they are making use of retrofit to talk to their server, which led to a very nice and readable interface documentation right in my decompiler.

Armed with this information, a bit of grepping through the rest of the decompiled code and my trusty curl client, I was able to document the subset of the API that I knew I was going to need for the minimal feature-set I wanted to implement.

In order to have a reference for the future, I have documented the API for myself in the OpenAPI format

This is useful documentation for myself and if I should ever decide to make the source code of this project available, then it’ll be useful for anybody else wanting to write a Smide client.

Moving to XCode: SmideKit

Now that I had the API documentation I needed, the next step was to start getting my SmideKit library going.

Even though there are tools out there that generate REST clients automatically based on an OpenAPI spec, all the tools I looked at produce code that relies on third-party libraries, often Alamofire. As Xcode 11 was in very rough shape already on its own, I wanted to minimize the dependencies on third-party libraries, so in the end, I opted to write my own thin wrapper on top of URLSession.
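
To give a flavor of what “thin wrapper” means here (a sketch – SmideKit itself isn’t public and its real names differ):

    import Foundation

    struct APIClient {
        let baseURL: URL
        var token: String?

        // perform a GET against the API, attaching the auth token if present
        func get(_ path: String,
                 completion: @escaping (Result<Data, Error>) -> Void) {
            var request = URLRequest(url: baseURL.appendingPathComponent(path))
            if let token = token {
                request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
            }
            URLSession.shared.dataTask(with: request) { data, _, error in
                if let error = error {
                    completion(.failure(error))
                } else {
                    completion(.success(data ?? Data()))
                }
            }.resume()
        }
    }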

The SmideKit library

SmideKit is a cross-platform (by Apple’s definition) library in that the code itself works across all of Apple’s OSes, but there are individual targets for the individual OSes.

But by manually setting the Bundle Name to $(PRODUCT_NAME) in the individual Info.plist files, I can make sure that all projects can just import SmideKit without any suffixes.
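
That is, each target’s Info.plist carries:

    <key>CFBundleName</key>
    <string>$(PRODUCT_NAME)</string>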

As this library is the most crucial part of the overall project, I have written unit tests for all methods to make sure we correctly deal with expiring tokens, unresponsive servers and so on.

The command line client

The first user of SmideKit would be a macOS command-line frontend called smidecli. It would offer various subcommands for listing bikes, booking them and ending bookings.

Here’s a screenshot of me booking a bike

Going from nowhere to the working command-line client took me the whole Beta 1 period. Two weeks is a long time, but between my actual day job and the parenting duties newly put upon me, my time was a bit limited.

Still, it felt good to go from nothing to writing a library, writing a command-line frontend and then actually using it to book a bike. On the other hand: None of the code written at this point had anything to do with the announcements at WWDC. All the work done could just as well have been done on the old SDKs. But still: I was sure that having a good foundation to stand on was going to pay off.

Next time: Adventures in Beta 2