Installing LineageOS 16 on a Samsung SM-T710 (gts28wifi)

Posted by Craige McWhirter on
  0. Check the prerequisites
  1. Backup any files you want to keep
  2. Download LineageOS ROM and optional GAPPS package
  3. Copy LineageOS image & additional packages to the SM-T710
  4. Boot into recovery mode
  5. Wipe the existing installation
  6. Format the device
  7. Install LineageOS ROM and other optional ROMs

0 - Check the Prerequisites

  • The device already has the latest TWRP installed.
  • Android debugging is enabled on the device.
  • ADB is installed on your workstation (a quick check is shown below).
  • You have a suitably configured SD card handy as a backup.
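Before going further, it's worth confirming that your workstation can actually see the tablet over ADB; a minimal check (the serial number shown is illustrative only):

$ adb devices
List of devices attached
0123456789ABCDEF	device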

I use this android.nix to ensure my NixOS environment has the prerequisites installed and configured for its side of the process.

1 - Backup any Files You Want to Keep

I like to use adb to pull the files from the device, though other methods are available too.

$ adb pull /sdcard/MyFolder ./Downloads/MyDevice/

Usage of adb is documented at Android Debug Bridge.

2 - Download LineageOS ROM and optional GAPPS package

I downloaded lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip from gts28wifi.

I also downloaded Open GApps ARM, nano to enable Google Apps.

I could have also downloaded and installed LineageOS addonsu and addonsu-remove but opted not to at this point.

3 - Copy LineageOS image & additional packages to the SM-T710

I use adb to copy the files across:

$ adb push ./lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip /sdcard/
./lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip: 1 file pushed. 12.1 MB/s (408677035 bytes in 32.263s)
$ adb push ./open_gapps-arm-9.0-nano-20190405.zip /sdcard/
./open_gapps-arm-9.0-nano-20190405.zip: 1 file pushed. 11.1 MB/s (185790181 bytes in 15.948s)

I also copy both to the SD card at this point, as the SM-T710 is an awful device to work with and will often, seemingly at random, stop working with ADB. When this happens, I fall back to the SD card.
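If you want to confirm that the pushes landed before rebooting, a quick listing on the device does the job (paths as used above):

$ adb shell ls -l /sdcard/*.zip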

4 - Boot into recovery mode

I power the device off, then power it back into recovery mode by holding down [home]+[volume up]+[power].

5 - Wipe the existing installation

Press Wipe then Advanced Wipe.

Select:

  • Dalvik / ART Cache
  • System
  • Data
  • Cache

Swipe the "Swipe to Wipe" slider at the bottom of the screen.

Press Back to return to the Advanced Wipe screen.

Press the triangular "back" button once to return to the Wipe screen.

6 - Format the device

Press Format Data.

Type yes and press the blue check mark at the bottom-right corner to commence the format process.

Press Back to return to the Wipe screen.

Press the triangular "back" button twice to return to the main screen.

7 - Install LineageOS ROM and other optional ROMs

Press Install, select the images you wish to install, and swipe to start the installation.

Reboot when it's completed and you should be off and running with a brand new LineageOS 16 on this tablet.

Deploying TT-RSS on NixOS

Posted by Craige McWhirter on

NixOS Gears by Craige McWhirter

Deploying a vanilla Tiny Tiny RSS server on NixOS via NixOps is fairly straightforward.

My preferred method is to craft a tt-rss.nix file that describes the configuration of the TT-RSS server.

tt-rss.nix:

{ config, pkgs, lib, ... }:

{

  services.tt-rss = {
    enable = true;                                # Enable TT-RSS
    database = {                                  # Configure the database
      type = "pgsql";                             # Database type
      passwordFile = "/run/keys/tt-rss-dbpass";   # Where to find the password
    };
    email = {
      fromAddress = "news@mydomain";              # Address for outgoing email
      fromName = "News at mydomain";              # Display name for outgoing email
    };
    selfUrlPath = "https://news.mydomain/";       # Root web URL
    virtualHost = "news.mydomain";                # Setup a virtualhost
  };

  services.postgresql = {
    enable = true;                # Ensure postgresql is enabled
    authentication = ''
      local tt_rss all ident map=tt_rss-users
    '';
    identMap =                    # Map the tt-rss user to postgresql
      ''
        tt_rss-users tt_rss tt_rss
      '';
  };

  services.nginx = {
    enable = true;                                          # Enable Nginx
    recommendedGzipSettings = true;
    recommendedOptimisation = true;
    recommendedProxySettings = true;
    recommendedTlsSettings = true;
    virtualHosts."news.mydomain" = {                        # TT-RSS hostname
      enableACME = true;                                    # Use ACME certs
      forceSSL = true;                                      # Force SSL
    };
  };

  security.acme.certs = {
      "news.mydomain".email = "email@mydomain";
  };

}

This line from the above file should stand out:

              passwordFile = "/run/keys/tt-rss-dbpass";   # Where to find the password

The passwordFile option requires that you use a secrets file with NixOps.

Where does that file come from? It's pulled from a secrets.nix file (example), which for this deployment could look like this:

secrets.nix:

{ config, pkgs, ... }:

{
  deployment.keys = {
    # Database key for TT-RSS
    tt-rss-dbpass = {
      text        = "vaetohH{u9Veegh3caechish";   # Password, generated using pwgen -yB 24
      user        = "tt_rss";                     # User to own the key file
      group       = "wheel";                      # Group to own the key file
      permissions = "0640";                       # Key file permissions
    };

  };
}

The file's path, /run/keys/tt-rss-dbpass, is determined by the attribute path: deployment.keys sets the base directory /run/keys, and the next element, tt-rss-dbpass, is a descriptive name chosen by the stanza's author that both documents the key's use and provides the final file name.
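Once deployed, you can confirm the key landed with the expected owner and permissions; a quick check via NixOps (deployment and host names as used further below):

$ nixops ssh -d MyDeployment myhost -- ls -l /run/keys/tt-rss-dbpass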

Now that we have described the TT-RSS service in tt-rss.nix and the required credentials in secrets.nix, we need to pull it all together for deployment. We achieve that in this case by importing both of these files into our existing host definition:

myhost.nix:

    {
      myhost =
        { config, pkgs, lib, ... }:

        {

          imports =
            [
              ./secrets.nix                               # Import our secrets
              ./servers/tt-rss.nix                        # Import TT-RSS description
            ];

          deployment.targetHost = "192.168.132.123";   # Target's IP address

          networking.hostName = "myhost";              # Target's hostname.
        };
    }

To deploy TT-RSS to your NixOps-managed host, you merely run the deploy command for your already-configured host and deployment, which looks like this:

    $ nixops deploy -d MyDeployment --include myhost

You should now have a running TT-RSS server and be able to log in with the default admin user (admin: password).
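A quick way to confirm the web front end is answering (hostname as configured above):

$ curl -I https://news.mydomain/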

In my nixos-examples repo I have a servers directory with some example files and a README with information and instructions. You can use two of the files to generate a TT-RSS VM to take a quick poke around. There is also an example of how you can deploy TT-RSS in production using NixOps, as per this post.

If you wish to dig a little deeper, I have my production deployment over at mio-ops.

Deploying and Configuring Vim on NixOS

Posted by Craige McWhirter on

NixOS Gears by Craige McWhirter

I had a need to deploy vim and my particular preferred configuration both system-wide and across multiple systems (via NixOps).

I started by creating a file named vim.nix that would be imported into either /etc/nixos/configuration.nix or an appropriate NixOps Nix file. This example is a stub that shows a number of common configuration items:

vim.nix:

with import <nixpkgs> {};

vim_configurable.customize {
  name = "vim";   # Specifies the vim binary name.
  # Below you can specify what usually goes into `~/.vimrc`
  vimrcConfig.customRC = ''
    " Preferred global default settings:
    set number                    " Enable line numbers by default
    set background=dark           " Set the default background to dark or light
    set smartindent               " Automatically insert extra level of indentation
    set tabstop=4                 " Default tabstop
    set shiftwidth=4              " Default indent spacing
    set expandtab                 " Expand [TABS] to spaces
    syntax enable                 " Enable syntax highlighting
    colorscheme solarized         " Set the default colour scheme
    set t_Co=256                  " Use 256 colours in vim
    set spell spelllang=en_au     " Default spell checking language
    hi clear SpellBad             " Clear any unwanted default settings
    hi SpellBad cterm=underline   " Set the spell checking highlight style
    hi SpellBad ctermbg=NONE      " Set the spell checking highlight background
    match ErrorMsg '\s\+$'        " Highlight trailing whitespace

    let g:airline_powerline_fonts = 1   " Use powerline fonts
    let g:airline_theme='solarized'     " Set the airline theme

    set laststatus=2   " Set up the status line so it's coloured and always on

    " Add more settings below
  '';
  # store your plugins in Vim packages
  vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; {
    start = [               # Plugins loaded on launch
      airline               # Lean & mean status/tabline for vim that's light as air
      solarized             # Solarized colours for Vim
      vim-airline-themes    # Collection of themes for airline
      vim-nix               # Support for writing Nix expressions in vim
    ];
    # manually loadable by calling `:packadd $plugin-name`
    # opt = [ phpCompletion elm-vim ];
    # To automatically load a plugin when opening a filetype, add vimrc lines like:
    # autocmd FileType php :packadd phpCompletion
  };
}

I then needed to import this file into my system packages stanza:

  environment = {
    systemPackages = with pkgs; [
      someOtherPackages   # Normal package listing
      (
        import ./vim.nix
      )
    ];
  };

This will then install and configure Vim as you've defined it.
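A simple way to confirm you're getting the customised build is to check where the vim binary on your path resolves to; it should point into the Nix store (output shape illustrative):

$ readlink -f $(which vim)
/nix/store/...-vim-.../bin/vim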

If you'd like to give this build a run in a non-production space, I've written vim_vm.nix, which you can use to build a VM, ssh into it and test the Vim configuration:

$ nix-build '<nixpkgs/nixos>' -A vm --arg configuration ./vim_vm.nix
...
$ export QEMU_OPTS="-m 4192"
$ export QEMU_NET_OPTS="hostfwd=tcp::18080-:80,hostfwd=tcp::10022-:22"
$ ./result/bin/run-vim-vm-vm

Then, from another terminal:

$ ssh nixos@localhost -p 10022

And you should be in a freshly baked NixOS VM with your Vim config ready to be used.

There's an always current example of my production Vim configuration in my mio-ops repo.

Deploying Gitea on NixOS

Posted by Craige McWhirter on

NixOS Gitea by Craige McWhirter

I've been using GitLab for years but recently opted to switch to Gitea, primarily because of timing and because I was looking for something more lightweight, not because of any particular problems with GitLab.

To deploy Gitea via NixOps I chose to craft a Nix file (example) that would be included in a host definition. The definition linked above and shown below deploys Gitea using Postgres, Nginx, ACME certificates and reStructuredText rendering with syntax highlighting.

version-management/gitea_for_NixOps.nix:

    { config, pkgs, lib, ... }:

    {

      services.gitea = {
        enable = true;                               # Enable Gitea
        appName = "MyDomain: Gitea Service";         # Give the site a name
        database = {
          type = "postgres";                         # Database type
          passwordFile = "/run/keys/gitea-dbpass";   # Where to find the password
        };
        domain = "source.mydomain.tld";              # Domain name
        rootUrl = "https://source.mydomaain.tld/";   # Root web URL
        httpPort = 3001;                             # Provided unique port
        extraConfig = let
          docutils =
            pkgs.python37.withPackages (ps: with ps; [
              docutils                               # Provides rendering of ReStructured Text files
              pygments                               # Provides syntax highlighting
          ]);
        in ''
          [mailer]
          ENABLED = true
          FROM = "gitea@mydomain.tld"
          [service]
          REGISTER_EMAIL_CONFIRM = true
          [markup.restructuredtext]
          ENABLED = true
          FILE_EXTENSIONS = .rst
          RENDER_COMMAND = ${docutils}/bin/rst2html.py
          IS_INPUT_FILE = false
        '';
      };

      services.postgresql = {
        enable = true;                # Ensure postgresql is enabled
        authentication = ''
          local gitea all ident map=gitea-users
        '';
        identMap =                    # Map the gitea user to postgresql
          ''
            gitea-users gitea gitea
          '';
      };

      services.nginx = {
        enable = true;                                          # Enable Nginx
        recommendedGzipSettings = true;
        recommendedOptimisation = true;
        recommendedProxySettings = true;
        recommendedTlsSettings = true;
        virtualHosts."source.MyDomain.tld" = {                  # Gitea hostname
          enableACME = true;                                    # Use ACME certs
          forceSSL = true;                                      # Force SSL
          locations."/".proxyPass = "http://localhost:3001/";   # Proxy Gitea
        };
      };

      security.acme.certs = {
          "source.mydomain".email = "anEmail@mydomain.tld";
      };

    }

This line from the above file should stand out:

              passwordFile = "/run/keys/gitea-dbpass";   # Where to find the password

Where does that file come from? It's pulled from a secrets.nix file (example), which for this deployment could look like this:

secrets.nix:

    { config, pkgs, ... }:

    {
      deployment.keys = {
        # An example set of keys to be used for the Gitea service's DB authentication
        gitea-dbpass = {
          text        = "uNgiakei+x>i7shuiwaeth3z";   # Password, generated using pwgen -yB 24
          user        = "gitea";                      # User to own the key file
          group       = "wheel";                      # Group to own the key file
          permissions = "0640";                       # Key file permissions
        };
      };
    }

The file's path, /run/keys/gitea-dbpass, is determined by the attribute path: deployment.keys sets the base directory /run/keys, and the next element, gitea-dbpass, is a descriptive name chosen by the stanza's author that both documents the key's use and provides the final file name.

Now that we have described the Gitea service in gitea_for_NixOps.nix and the required credentials in secrets.nix, we need to pull it all together for deployment. We achieve that in this case by importing both of these files into our existing host definition:

myhost.nix:

    {
      myhost =
        { config, pkgs, lib, ... }:

        {

          imports =
            [
              ./secrets.nix                               # Import our secrets
              ./version-management/gitea_for_NixOps.nix   # Import Gitea
            ];

          deployment.targetHost = "192.168.132.123";   # Target's IP address

          networking.hostName = "myhost";              # Target's hostname.
        };
    }

To deploy Gitea to your NixOps-managed host, you merely run the deploy command for your already-configured host and deployment, which looks like this:

    $ nixops deploy -d MyDeployment --include myhost

You should now have a running Gitea server and be able to create an initial admin user.
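To confirm the service itself came up, you can check the unit's status on the target; the NixOS Gitea module runs a gitea systemd service:

$ nixops ssh -d MyDeployment myhost -- systemctl status gitea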

In my nixos-examples repo I have a version-management directory with some example files and a README with information and instructions. You can use two of the files to generate a Gitea VM to take a quick poke around. There is also an example of how you can deploy Gitea in production using NixOps, as per this post.

If you wish to dig a little deeper, I have my production deployment over at mio-ops.

Replacing a NixOS Service with an Upstream Version

Posted by Craige McWhirter on

NixOS Hydra Gears by Craige McWhirter

It's fairly well documented how to replace a NixOS service in the stable channel with one from the unstable channel.

What if you need to build from an upstream branch that's not in either of stable or unstable channels? This is how I go about it, including building a VM in which to test the result.

I specifically wanted to test the new hydra-notify service, so I needed to replace the existing Hydra module in nixpkgs with the one from the upstream source. Start by checking out the Hydra source:

$ git clone https://github.com/NixOS/hydra.git

We can configure Nix to replace the nixpkgs version of Hydra with a build from hydra/master.

You can see a completed example in hydra_notify.nix, but the key points are that we need to disable the Hydra module that ships with nixpkgs:

  disabledModules = [ "services/continuous-integration/hydra/default.nix" ];

as well as import the module definition from the Hydra source we downloaded:

  imports =
    [
      "/path/to/source/hydra/hydra-module.nix"
    ];

and we need to switch services.hydra to services.hydra-dev in two locations:

  networking.firewall.allowedTCPPorts = [ config.services.hydra-dev.port 80 443 ];

  services.hydra-dev = {
    ...
  };

With these three changes, we have swapped out the Hydra in nixpkgs for one to be built from the upstream source in hydra_notify.nix.
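Pulled together, the skeleton of hydra_notify.nix looks roughly like this (a sketch only; the source path is wherever you cloned Hydra, and the services.hydra-dev options mirror the usual services.hydra ones):

  { config, pkgs, ... }:

  {
    # Drop the Hydra module that ships with nixpkgs...
    disabledModules = [ "services/continuous-integration/hydra/default.nix" ];

    # ...and use the module from the upstream checkout instead.
    imports = [ "/path/to/source/hydra/hydra-module.nix" ];

    # The upstream module exposes its options as services.hydra-dev.
    services.hydra-dev = {
      enable = true;
      # The rest of your usual Hydra configuration goes here.
    };
  }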

Next we need to build a configuration for our VM that uses the replaced Hydra module declared in hydra_notify.nix. This is hydra_vm.nix, a simple NixOS configuration that, importantly, includes our replaced Hydra module:

  imports =
    [
      ./hydra_notify.nix
    ];

To give this a run yourself, check out nixos-examples and change to the services/hydra_upstream directory:

$ git clone https://code.mcwhirter.io/craige/nixos-examples.git
$ cd nixos-examples/services/hydra_upstream

After updating the path to Hydra's source, we can then build the VM with:

$ nix-build '<nixpkgs/nixos>' -A vm --arg configuration ./hydra_vm.nix

Before launching the VM, I like to make sure that it is provided with enough RAM and that both Hydra's web UI and SSH are reachable, by exporting the Qemu options below:

$ export QEMU_OPTS="-m 4192"
$ export QEMU_NET_OPTS="hostfwd=tcp::10443-:443,hostfwd=tcp::10022-:22"

So now we're ready to launch the VM:

$ ./result/bin/run-hydra-notifications-vm

Once it has booted, you should be able to ssh nixos@localhost -p 10022 and hit the Hydra web UI at localhost:10443.

Once you've logged into the VM you can run systemctl status hydra-notify to check that you're running upstream Hydra.

NixOS Appears to be Always Building From Source

Posted by Craige McWhirter on

NixOS Gears by Craige McWhirter

One of the things that NixOS and Hydra make easy is running your own custom cache of packages. A number of projects and companies make use of this.

You can then make use of these caches by adding them to /etc/nixos/configuration.nix (for NixOS users) or nix.conf (for other Nix users).

What most people will want is for their devices to have access to both caches.

If you add the new cache "incorrectly", you may suddenly find your device building almost everything from source, as I did.

On NixOS, the generated /etc/nix/nix.conf contains these lines by default:

substituters = https://cache.nixos.org
...
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=

Many projects running custom caches will advise NixOS users to add a stanza like this to /etc/nixos/configuration.nix:

{
  nix = {
    binaryCaches = [
      "https://cache.my-project.org/"
    ];
    binaryCachePublicKeys = [
      "cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP="
    ];
  };
}

If you add only this stanza to your NixOS configuration, you will end up with a nix.conf that looks like this:

...
substituters = https://cache.my-project.org/
...
trusted-public-keys = cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP=
...

This will result in your systems pulling cached packages only from that cache and building everything else that's missing from source.

If you want to take advantage of what a custom cache is providing without losing the advantages of the primary NixOS cache, your stanza in configuration.nix needs to look like this:

{
  nix = {
    binaryCaches = [
      "https://cache.nixos.org"
      "https://cache.my-project.org/"
    ];
    binaryCachePublicKeys = [
      "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
      "cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP="
    ];
  };
}

You will now get the benefit of both caches, and your nix.conf will look like:

...
substituters = https://cache.nixos.org https://cache.my-project.org/
...
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP=
...

The order in this list does not matter; I just feel more comfortable putting the cache I consider "primary" first. The effective precedence is determined by Nix, using the Priority value in the nix-cache-info file that each cache serves:

$ curl https://cache.nixos.org/nix-cache-info
StoreDir: /nix/store
WantMassQuery: 1
Priority: 40
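You can run the same query against any other cache you use; the cache with the lower Priority value takes precedence (the URL here is the hypothetical project cache from the stanzas above):

$ curl https://cache.my-project.org/nix-cache-info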

If you were experiencing excessive building from source and your intention was to draw from two caches, this should resolve it for you.

Installing Your First Hydra

Posted by Craige McWhirter on

NixOS Hydra Gears by Craige McWhirter

Hydra is a Nix-based continuous build system. My method for configuring a server as a Hydra build server is to create a hydra.nix file like this:

# NixOps configuration for machines running Hydra

{ config, pkgs, lib, ... }:

{

  services.postfix = {
    enable = true;
    setSendmail = true;
  };

  services.postgresql = {
    enable = true;
    package = pkgs.postgresql;
    identMap =
      ''
        hydra-users hydra hydra
        hydra-users hydra-queue-runner hydra
        hydra-users hydra-www hydra
        hydra-users root postgres
        hydra-users postgres postgres
      '';
  };

  networking.firewall.allowedTCPPorts = [ config.services.hydra.port ];

  services.hydra = {
    enable = true;
    useSubstitutes = true;
    hydraURL = "https://my.website.org";
    notificationSender = "my.website.org";
    buildMachinesFiles = [];
    extraConfig = ''
      store_uri = file:///var/lib/hydra/cache?secret-key=/etc/nix/my.website.org/secret
      binary_cache_secret_key_file = /etc/nix/my.website.org/secret
      binary_cache_dir = /var/lib/hydra/cache
    '';
  };

  services.nginx = {
    enable = true;
    recommendedProxySettings = true;
    virtualHosts."my.website.org" = {
      forceSSL = true;
      enableACME = true;
      locations."/".proxyPass = "http://localhost:3000";
    };
  };

  security.acme.certs = {
      "my.website.org".email = "my.email@my.website.org";
  };

  systemd.services.hydra-manual-setup = {
    description = "Create Admin User for Hydra";
    serviceConfig.Type = "oneshot";
    serviceConfig.RemainAfterExit = true;
    wantedBy = [ "multi-user.target" ];
    requires = [ "hydra-init.service" ];
    after = [ "hydra-init.service" ];
    environment = builtins.removeAttrs (config.systemd.services.hydra-init.environment) ["PATH"];
    script = ''
      if [ ! -e ~hydra/.setup-is-complete ]; then
        # create signing keys
        /run/current-system/sw/bin/install -d -m 551 /etc/nix/my.website.org
        /run/current-system/sw/bin/nix-store --generate-binary-cache-key my.website.org /etc/nix/my.website.org/secret /etc/nix/my.website.org/public
        /run/current-system/sw/bin/chown -R hydra:hydra /etc/nix/my.website.org
        /run/current-system/sw/bin/chmod 440 /etc/nix/my.website.org/secret
        /run/current-system/sw/bin/chmod 444 /etc/nix/my.website.org/public
        # create cache
        /run/current-system/sw/bin/install -d -m 755 /var/lib/hydra/cache
        /run/current-system/sw/bin/chown -R hydra-queue-runner:hydra /var/lib/hydra/cache
        # done
        touch ~hydra/.setup-is-complete
      fi
    '';
  };
  nix.trustedUsers = ["hydra" "hydra-evaluator" "hydra-queue-runner"];
  nix.buildMachines = [
    {
      hostName = "localhost";
      systems = [ "x86_64-linux" "i686-linux" ];
      maxJobs = 6;
      # for building VirtualBox VMs as build artifacts, you might need other
      # features depending on what you are doing
      supportedFeatures = [ ];
    }
  ];
}

From there it can be imported in your configuration.nix or NixOps files like this:

{ config, pkgs, ... }:

{

  imports =
    [
      ./hydra.nix
    ];

...
}

To deploy Hydra, you will then need to either run nixos-rebuild switch on the server or use nixops deploy -d my.network.
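That is, one of the following, depending on how the host is managed (deployment name as above):

$ nixos-rebuild switch          # directly on the server
$ nixops deploy -d my.network   # from your NixOps deployment host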

The result of this deployment, via NixOps, can be seen at hydra.mcwhirter.io.

Setting Up Wireless Networking with NixOS

Posted by Craige McWhirter on

NixOS Gears by Craige McWhirter

The current NixOS Manual is a little sparse on details for the different options for configuring wireless networking. The version in master is a little better but still ambiguous. I've made a pull request to resolve this, but in the interim this post documents how to configure a number of wireless scenarios with NixOS.

If you're going to use NetworkManager, this is not for you. This is for those of us who want reproducible configurations.

To enable a wireless connection that uses a pre-shared key and has no spaces or special characters in its name, you first need to generate the raw PSK:

$ wpa_passphrase exampleSSID abcd1234
network={
        ssid="exampleSSID"
        #psk="abcd1234"
        psk=46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d
}

Now you can add the following stanza to your configuration.nix to enable wireless networking and this specific wireless connection:

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    exampleSSID = {
      pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
    };
  };
};

If you had another WiFi connection that had spaces and/or special characters in the name, you would configure it like this:

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    "example's SSID" = {
      pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
    };
  };
};

If you need to connect to a hidden network, you would do it like this:

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    myHiddenSSID = {
      hidden = true;
      pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
    };
  };
};

The final scenario that I have is connecting to open SSIDs that use some kind of secondary method (like a web login page) to authenticate connections:

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    FreeWiFi = {};
  };
};
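These scenarios can, of course, be combined: the networks attribute set takes as many entries as you need, so a single stanza covering all of the above connections would look like this (values repeated from the examples above):

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    exampleSSID = {
      pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
    };
    "example's SSID" = {
      pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
    };
    myHiddenSSID = {
      hidden = true;
      pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
    };
    FreeWiFi = {};
  };
};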

This is all fairly straightforward but it was non-trivial to find the answers to.

Generating an sha256 Hash for Nix Packages

Posted by Craige McWhirter on

NixOS Gears by Craige McWhirter

Let's say that you're working to replicate the PureOS environment for the Librem 5 phone so that you can run NixOS on it instead, and you need to package "calls". Perhaps you just want to use Nix to package something else that isn't packaged yet.

When you start digging into Nix packaging, you'll start to see stanzas like this one:

src = fetchFromGitLab {
  domain = "source.puri.sm";
  owner = "Librem5";
  repo = pname;
  rev = "v${version}";
  sha256 = "1702hbdqhfpgw0c4vj2ag08vgl83byiryrbngbq11b9azmj3jhzs";
};

It's fairly self-explanatory: merely a breakdown of a URL into its component parts so that they can be reused elsewhere in the packaging system. It was the generation of the sha256 hash that stumped me the most.
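For reference, on this GitLab instance those parts assemble into the source archive URL that nix-prefetch-url downloads below; roughly (a reconstruction, using the libhandy values from the stanza above):

# https://${domain}/${owner}/${repo}/-/archive/${rev}/${repo}-${rev}.tar.gz
# => https://source.puri.sm/Librem5/libhandy/-/archive/v0.0.10/libhandy-v0.0.10.tar.gz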

I was not able to guess how the hash was generated, nor could I find clear instructions in the otherwise pretty thorough Nix documentation.

Putting clues together from a variety of other blog posts, this is how I eventually came to generate the correct sha256 hash for Nix packages.

Using the above hash for libhandy, I was able to test the method I'd come up with: use nix-prefetch-url to download the tagged version and emit an sha256 hash, which I could compare to the one in the existing libhandy default.nix file:

$ nix-prefetch-url --unpack https://source.puri.sm/Librem5/libhandy/-/archive/v0.0.10/libhandy-v0.0.10.tar.gz
unpacking...
[0.3 MiB DL]
path is
'/nix/store/58i61w34hx06gcdaf1x0gwi081qk54km-libhandy-v0.0.10.tar.gz'
1702hbdqhfpgw0c4vj2ag08vgl83byiryrbngbq11b9azmj3jhzs

Lo and behold, I have matching sha256 hashes. As I want to create a package for "calls", I can now safely do the same against its repository on the way to crafting a Nix file for it:

$ nix-prefetch-url --unpack https://source.puri.sm/Librem5/calls/-/archive/v0.0.1/calls-v0.0.1.tar.gz
unpacking...
[0.1 MiB DL]
path is '/nix/store/3c7aifgmf90d7s60ph5lla2qp4kzarb8-calls-v0.0.1.tar.gz'
0qjgajrq3kbml3zrwwzl23jbj6y62ccjakp667jq57jbs8af77pq

That sha256 hash is what I'll drop into my nix file for "calls":

src = fetchFromGitLab {
  domain = "source.puri.sm";
  owner = "Librem5";
  repo = pname;
  rev = "v${version}";
  sha256 = "0qjgajrq3kbml3zrwwzl23jbj6y62ccjakp667jq57jbs8af77pq";
};

Now we have an sha256 hash that can be used by Nix to verify source downloads before building.

Installing NixOS on a Headless Raspberry Pi 3

Posted by Craige McWhirter on

NixOS Raspberry Pi Gears by Craige McWhirter

This represents the first step in being able to build ready-to-run NixOS images for headless Raspberry Pi 3 devices. Aarch64 images for NixOS need to be built natively on aarch64 hardware, so the first Pi 3, the subject of this post, will need a keyboard and screen attached for two commands.

A fair chunk of this post is collated from NixOS on ARM and NixOS on ARM/Raspberry Pi into a coherent, flowing process with additional steps related to the goal of this being a headless Raspberry Pi 3.

Head to Hydra job nixos:release-19.03:nixos.sd_image.aarch64-linux and download the latest successful build, e.g.:

$ wget https://hydra.nixos.org/build/95346103/download/1/nixos-sd-image-19.03.172980.d5a3e5f476b-aarch64-linux.img

You will then need to write this to your SD Card:

# dd if=nixos-sd-image-19.03.172980.d5a3e5f476b-aarch64-linux.img of=/dev/sdX status=progress

Make sure you replace "/dev/sdX" with the correct location of your SD card.

Once the SD card has been written, attach the keyboard and screen, insert the SD card into the Pi and boot it up.

When the boot process has completed, you will be dropped to a root prompt, where you need to set a password for root and start the ssh service:

[root@pi-tri:~]#
[root@pi-tri:~]# passwd
New password:
Retype new password:
passwd: password updated successfully

[root@pi-tri:~]# systemctl start sshd

You can now complete the rest of this process from the comfort of wherever you normally work.

After successfully ssh-ing in and examining your disk layout with lsblk, the first step is to move the boot flag off the undersized FAT32 /boot partition and onto the main Linux partition:

# fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 7.4 GiB, 7948206080 bytes, 15523840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2178694e

Device         Boot  Start      End  Sectors  Size Id Type
/dev/mmcblk0p1 *     16384   262143   245760  120M  b W95 FAT32
/dev/mmcblk0p2      262144 15522439 15260296  7.3G 83 Linux


# echo -e 'a\n1\na\n2\nw' | fdisk /dev/mmcblk0

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): Partition number (1,2, default 2):
The bootable flag on partition 1 is disabled now.

Command (m for help): Partition number (1,2, default 2):
The bootable flag on partition 2 is enabled now.

Command (m for help): The partition table has been altered.
Syncing disks.

# fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 7.4 GiB, 7948206080 bytes, 15523840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2178694e

Device         Boot  Start      End  Sectors  Size Id Type
/dev/mmcblk0p1       16384   262143   245760  120M  b W95 FAT32
/dev/mmcblk0p2 *    262144 15522439 15260296  7.3G 83 Linux

Next we need to configure NixOS to boot the basic system we need, with ssh enabled, root and a single user set up, and the disks configured correctly. I have this example file, which at the time of writing looked like this:

# This is an example of a basic NixOS configuration file for a Raspberry Pi 3.
# It's best used as your first configuration.nix file and provides ssh, root
# and user accounts as well as Pi 3 specific tweaks.

{ config, pkgs, lib, ... }:

{
  # NixOS wants to enable GRUB by default
  boot.loader.grub.enable = false;
  # Enables the generation of /boot/extlinux/extlinux.conf
  boot.loader.generic-extlinux-compatible.enable = true;

  # For a Raspberry Pi 2 or 3:
  boot.kernelPackages = pkgs.linuxPackages_latest;

  # !!! Needed for the virtual console to work on the RPi 3, as the default of 16M doesn't seem to be enough.
  # If X.org behaves weirdly (I only saw the cursor) then try increasing this to 256M.
  boot.kernelParams = ["cma=32M"];

  # File systems configuration for using the installer's partition layout
  fileSystems = {
    "/" = {
      device = "/dev/disk/by-label/NIXOS_SD";
      fsType = "ext4";
    };
  };

  # !!! Adding a swap file is optional, but strongly recommended!
  swapDevices = [ { device = "/swapfile"; size = 1024; } ];

  hardware.enableRedistributableFirmware = true; # Enable support for Pi firmware blobs

  networking.hostName = "nixosPi";     # Define your hostname.
  networking.wireless.enable = false;  # Toggles wireless support via wpa_supplicant.

  # Select internationalisation properties.
  i18n = {
    consoleFont = "Lat2-Terminus16";
    consoleKeyMap = "us";
    defaultLocale = "en_AU.UTF-8";
  };

  time.timeZone = "Australia/Brisbane"; # Set your preferred timezone:

  # List services that you want to enable:
  services.openssh.enable = true;  # Enable the OpenSSH daemon.

  # Configure users for your Pi:
  users.mutableUsers = false;      # Remove any users not defined in here

  users.users.root = {
    hashedPassword = "$6$eeqJLxwQzMP4l$GTUALgbCfaqR8ut9kQOOG8uXOuqhtIsIUSP.4ncVaIs5PNlxdvAvV.krfutHafrxNN7KzaM7uksr6bXP5X0Sx1";
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 Voohu4vei4dayohm3eeHeecheifahxeetauR4geigh9eTheey3eedae4ais7pei4ruv4 me@myhost"
    ];
  };

  # Groups to add
  users.groups.myusername.gid = 1000;

  # Define a user account.
  users.users.myusername = {
    isNormalUser = true;
    uid = 1000;
    group = "myusername";
    extraGroups = ["wheel" ];
    hashedPassword = "$6$l2I7i6YqMpeviVy$u84FSHGvZlDCfR8qfrgaP.n7/hkfGpuiSaOY3ziamwXXHkccrOr8Md4V5G2M1KcMJQmX5qP7KOryGAxAtc5T60";
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 Voohu4vei4dayohm3eeHeecheifahxeetauR4geigh9eTheey3eedae4ais7pei4ruv4 me@myhost"
    ];
  };

  # This value determines the NixOS release with which your system is to be
  # compatible, in order to avoid breaking some software such as database
  # servers. You should change this only after NixOS release notes say you
  # should.
  system.stateVersion = "19.03"; # Did you read the comment?
  system.autoUpgrade.enable = true;
  system.autoUpgrade.channel = https://nixos.org/channels/nixos-19.03;
}
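To get the configuration onto the Pi, something as simple as scp works (the file name here is whatever you saved the example as, and the address is your Pi's):

$ scp configuration.nix root@<pi address>:/etc/nixos/configuration.nix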

Once this is copied into place, you only need to rebuild NixOS using it by running:

# nixos-rebuild switch

You should now have a headless Pi 3 which you can use to build SD card images for other Pi 3s that are fully configured and ready to run.
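Building those images is a topic for a follow-up post, but as a rough sketch of the starting point (assuming a wrapper configuration that imports the sd-image-aarch64 module; the module path and attribute are as per the 19.03 tree and may differ in later releases):

# sd-image.nix -- a hypothetical wrapper around the configuration above
{ config, pkgs, lib, ... }:
{
  imports = [
    <nixpkgs/nixos/modules/installer/cd-dvd/sd-image-aarch64.nix>
    ./configuration.nix   # The Pi 3 configuration shown earlier
  ];
}

$ nix-build '<nixpkgs/nixos>' -A config.system.build.sdImage -I nixos-config=./sd-image.nix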