Welcome to my blog. Have a look at the most recent posts below, or browse the tag cloud on the right. An archive of all posts is also available.
Because of course the system that UNB paid literal millions of dollars for cannot actually tell me if students in my course have passed (with C or above) the single prerequisite course. I could handle a small number of false positives, but apparently it also has an unknown number of false negatives (missed students), and no way to detect those without checking every student. Which makes this very expensive report useless.
In the unlikely event someone else finds this useful. Or just wants to make fun of my Python.
import csv
import sys

enrolled = set()
passed = set()
record = {}

# First pass: remember everyone on the course roster.
with open(sys.argv[1], 'r') as rosterfile:
    rosterreader = csv.DictReader(rosterfile)
    for row in rosterreader:
        id = row['Student ID']
        enrolled.add(id)
        record[id] = row

# Second pass: scan the course-history report, copy rows for enrolled
# students to report.csv, and note who has a passing grade.
with open('report.csv', 'w') as outfile:
    with open(sys.argv[2], 'r') as historyfile:
        historyreader = csv.DictReader(historyfile)
        writer = None
        for row in historyreader:
            if not writer:
                fields = row.keys()
                writer = csv.DictWriter(outfile, fieldnames=fields)
                writer.writeheader()
            id = row['StudentID']
            grade = row['Final Grade']
            if grade not in ['F', 'D', 'C-']:
                passed.add(id)
            if id in enrolled:
                writer.writerow(row)

print("Students missing prerequisite:\n")
for id in enrolled.difference(passed):
    row = record[id]
    print(f"{row['Student ID']}\t{row['Student Name']}\t{row['Preferred Email']}")
My web pages are (still) in ikiwiki, but lately I have started authoring things like assignments and lectures in org-mode so that I can have some literate programming facilities. There is org-mode export built in, but it just exports source blocks as examples (i.e. unhighlighted verbatim). I added a custom exporter to mark up source blocks in a way ikiwiki can understand. Luckily this is not too hard the second time.
(with-eval-after-load "ox-md"
  (org-export-define-derived-backend 'ik 'md
    :translate-alist '((src-block . ik-src-block))
    :menu-entry '(?m 1 ((?i "ikiwiki" ik-export-to-ikiwiki)))))
(defun ik-normalize-language (str)
  (cond
   ((string-equal str "plait") "racket")
   ((string-equal str "smol") "racket")
   (t str)))
(defun ik-src-block (src-block contents info)
  "Transcode a SRC-BLOCK element from Org to an ikiwiki format directive.
CONTENTS is nil.  INFO is a plist used as a communication
channel."
  (let* ((body (org-element-property :value src-block))
         (lang (ik-normalize-language (org-element-property :language src-block))))
    (format "[[!format %s \"\"\"\n%s\"\"\"]]" lang body)))
(defun ik-export-to-ikiwiki
    (&optional async subtreep visible-only body-only ext-plist)
  "Export current buffer as an ikiwiki markdown file.
See org-md-export-to-markdown for full docs."
  (require 'ox)
  (interactive)
  (let ((file (org-export-output-file-name ".mdwn" subtreep)))
    (org-export-to-file 'ik file
      async subtreep visible-only body-only ext-plist)))
See web-stacker for the background. yantar92 on #org-mode pointed out that a derived backend would be a cleaner solution. I had initially thought it was too complicated, but I have to agree the example in the org-mode documentation does pretty much what I need.
This new approach has the big advantage that the generation of URLs happens at export time, so it's not possible for the displayed program code and the version encoded in the URL to get out of sync.
;; derived backend to customize src block handling
(defun my-beamer-src-block (src-block contents info)
  "Transcode a SRC-BLOCK element from Org to beamer.
CONTENTS is nil.  INFO is a plist used as a communication
channel."
  (let ((attr (org-export-read-attribute :attr_latex src-block :stacker)))
    (concat
     (when (or (not attr) (string= attr "both"))
       (org-export-with-backend 'beamer src-block contents info))
     (when attr
       (let* ((body (org-element-property :value src-block))
              (table '(? ?\n ?: ?/ ?? ?# ?[ ?] ?@ ?! ?$ ?& ??
                       ?( ?) ?* ?+ ?, ?= ?%))
              (slug (org-link-encode body table))
              (simplified (replace-regexp-in-string "[%]20" "+" slug nil 'literal)))
         (format "\n\\stackerlink{%s}" simplified))))))
(defun my-beamer-export-to-latex
    (&optional async subtreep visible-only body-only ext-plist)
  "Export current buffer as a (my)Beamer presentation (tex).
See org-beamer-export-to-latex for full docs."
  (interactive)
  (let ((file (org-export-output-file-name ".tex" subtreep)))
    (org-export-to-file 'my-beamer file
      async subtreep visible-only body-only ext-plist)))

(defun my-beamer-export-to-pdf
    (&optional async subtreep visible-only body-only ext-plist)
  "Export current buffer as a (my)Beamer presentation (PDF).
See org-beamer-export-to-pdf for full docs."
  (interactive)
  (let ((file (org-export-output-file-name ".tex" subtreep)))
    (org-export-to-file 'my-beamer file
      async subtreep visible-only body-only ext-plist
      #'org-latex-compile)))

(with-eval-after-load "ox-beamer"
  (org-export-define-derived-backend 'my-beamer 'beamer
    :translate-alist '((src-block . my-beamer-src-block))
    :menu-entry '(?l 1 ((?m "my beamer .tex" my-beamer-export-to-latex)
                        (?M "my beamer .pdf" my-beamer-export-to-pdf)))))
An example of using this in an org document is below. The first source code block generates only a link in the output, while the last adds a generated link to the normal highlighted source code.
* Stuff
** Frame
#+attr_latex: :stacker t
#+NAME: last
#+BEGIN_SRC stacker :eval no
(f)
#+END_SRC
#+name: smol-example
#+BEGIN_SRC stacker :noweb yes
(defvar x 1)
(deffun (f)
  (let ([y 2])
    (deffun (h)
      (+ x y))
    (h)))
<<last>>
#+END_SRC
** Another Frame
#+ATTR_LATEX: :stacker both
#+begin_src smol :noweb yes
<<smol-example>>
#+end_src
The Emacs part is superseded by a cleaner approach.
In the upcoming term I want to use KC Lu's web based stacker tool. The key point is that it takes (small) programs encoded as part of the URL. Yesterday I spent some time integrating it into my existing org-beamer workflow.
In my init.el I have
(defun org-babel-execute:stacker (body params)
  (let* ((table '(? ?\n ?: ?/ ?? ?# ?[ ?] ?@ ?! ?$ ?& ??
                  ?( ?) ?* ?+ ?, ?= ?%))
         (slug (org-link-encode body table))
         (simplified (replace-regexp-in-string "[%]20" "+" slug nil 'literal)))
    (format "\\stackerlink{%s}" simplified)))
This means that when I "execute" the block below with C-c C-c, it updates the link, which is then embedded in the slides.
#+begin_src stacker :results value latex :exports both
(deffun (f x)
  (let ([y 2])
    (+ x y)))
(f 7)
#+end_src
#+RESULTS:
#+begin_export latex
\stackerlink{%28deffun+%28f+x%29%0A++%28let+%28%5By+2%5D%29%0A++++%28%2B+x+y%29%29%29%0A%28f+7%29}
#+end_export
The \stackerlink macro is probably fancier than needed. One could just use \href from hyperref.sty, but I wanted to match the appearance of other links in my documents (buttons in the margins). This is based on a now lost answer from stackoverflow.com; I think it wasn't this one, but you get the main idea: use \hyper@normalise.
\makeatletter
% define \stacker@base appropriately
\DeclareRobustCommand*{\stackerlink}{\hyper@normalise\stackerlink@}
\def\stackerlink@#1{%
\begin{tikzpicture}[overlay]%
\coordinate (here) at (0,0);%
\draw (current page.south west |- here)%
node[xshift=2ex,yshift=3.5ex,fill=magenta,inner sep=1pt]%
{\hyper@linkurl{\tiny\textcolor{white}{stacker}}{\stacker@base?program=#1}}; %
\end{tikzpicture}}
\makeatother
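The macro also assumes a bit of preamble support; roughly the following sketch (the base URL is a placeholder for wherever the stacker instance lives, and beamer already pulls in hyperref):

% tikz for the overlay drawing next to the frame edge
\usepackage{tikz}
\makeatletter
\def\stacker@base{https://example.org/stacker/index.html}
\makeatother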
Problem description(s)
For some of its cheaper dedicated servers, OVH does not provide a KVM (in the virtual console sense) interface. Sometimes when a virtual console is provided, it requires a horrible java applet that won't run on modern systems without a lot of song and dance. Although OVH provides a few web based ways of installing,
- I prefer to use the debian installer image I'm used to and trust, and
- I needed some way to debug a broken install.
I have only tested this in the OVH rescue environment, but the general approach should work anywhere the rescue environment can install and run QEMU.
QEMU to the rescue
Initially I was horrified by the ovh forums post but eventually I realized it not only gave a way to install from a custom ISO, but provided a way to debug quite a few (but not all, as I discovered) boot problems by using the rescue env (which is an in-memory Debian Buster, with an updated kernel). The original solution uses VNC but that seemed superfluous to me, so I modified the procedure to use a "serial" console.
Preliminaries
- Set up a default ssh key in the OVH web console
- (re)boot into rescue mode
- ssh into root@yourhost (you might need to ignore changing host keys)
- cd /tmp
- You will need qemu (and may as well use kvm). ovmf is needed for a UEFI BIOS.
apt install qemu-kvm ovmf
- Download the netinstaller iso
Download vmlinuz and initrd.gz that match your iso. In my case:
https://deb.debian.org/debian/dists/testing/main/installer-amd64/current/images/cdrom/
Doing the install
- Boot the installer in qemu. Here the system has two hard drives visible as /dev/sda and /dev/sdb.
qemu-system-x86_64 \
-enable-kvm \
-nographic \
-m 2048 \
-bios /usr/share/ovmf/OVMF.fd \
-drive index=0,media=disk,if=virtio,file=/dev/sda,format=raw \
-drive index=1,media=disk,if=virtio,file=/dev/sdb,format=raw \
-cdrom debian-bookworm-DI-alpha2-amd64-netinst.iso \
-kernel ./vmlinuz \
-initrd ./initrd.gz \
-append console=ttyS0,9600,n8
- Optionally follow Debian wiki to configure root on software raid.
- Make sure your disk(s) have an ESP partition.
- qemu and d-i are both using Ctrl-a as a prefix, so you need to C-a C-a 1 (e.g.) to switch terminals
- Make sure you install an ssh server and create a user account
Before leaving the rescue environment
- You may have forgotten something important. No problem, you can boot the disks you just installed in qemu (I leave the apt here for convenient copy pasta in future rescue environments).
apt install qemu-kvm ovmf && \
qemu-system-x86_64 \
-enable-kvm \
-nographic \
-m 2048 \
-bios /usr/share/ovmf/OVMF.fd \
-drive index=0,media=disk,if=virtio,file=/dev/sda,format=raw \
-drive index=1,media=disk,if=virtio,file=/dev/sdb,format=raw \
-nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
-boot c
One important gotcha is that the installer guesses interface names based on the "hardware" it sees during the install. I wanted the network to work both in QEMU and in a bare hardware boot, so I added a couple of link files. If you copy this, you most likely need to double check the PCI paths. You can get this information, e.g. from udevadm, but note that for the second case you want to query in the rescue env, not in QEMU.
/etc/systemd/network/50-qemu-default.link
[Match]
Path=pci-0000:00:03.0
Virtualization=kvm
[Link]
Name=lan0
/etc/systemd/network/50-hardware-default.link
[Match]
Path=pci-0000:03:00.0
Virtualization=no
[Link]
Name=lan0
- Then edit /etc/network/interfaces to refer to lan0
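A minimal /etc/network/interfaces stanza for the renamed interface might look like this (a sketch assuming DHCP; substitute the static configuration OVH gave you):

auto lan0
iface lan0 inet dhcp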
Spiffy new terminal emulators seem to come with their own terminfo definitions. Venerable hosts that I ssh into tend not to know about those. kitty comes with a thing to transfer that definition, but it breaks if the remote host is running tcsh (don't ask). Similarly the one-liner for alacritty on the arch wiki seems to assume the remote shell is bash. Forthwith, a dumb shell script that works to send the terminfo of the current terminal emulator to the remote host.
EDIT: Jakub Wilk worked out that this can be replaced with the one-liner
infocmp | ssh $host tic -x -
#!/bin/sh
if [ "$#" -ne 1 ]; then
    printf "usage: sendterminfo host\n"
    exit 1
fi
host="$1"
filename=$(mktemp terminfoXXXXXX)
cleanup () {
    rm "$filename"
}
trap cleanup EXIT
infocmp > "$filename"
remotefile=$(ssh "$host" mktemp)
scp -q "$filename" "$host:$remotefile"
ssh "$host" "tic -x \"$remotefile\""
ssh "$host" rm "$remotefile"
Unfortunately schroot does not maintain CPU affinity. This means in particular that parallel builds have the tendency to take over an entire slurm-managed server, which is kind of rude. I haven't had time to automate this yet, but the following demonstrates a simple workaround for interactive building.
╭─ simplex:~
╰─% schroot --preserve-environment -r -c polymake
(unstable-amd64-sbuild)bremner@simplex:~$ echo $SLURM_CPU_BIND_LIST
0x55555555555555555555
(unstable-amd64-sbuild)bremner@simplex:~$ grep Cpus /proc/self/status
Cpus_allowed: ffff,ffffffff,ffffffff
Cpus_allowed_list: 0-79
(unstable-amd64-sbuild)bremner@simplex:~$ taskset $SLURM_CPU_BIND_LIST bash
(unstable-amd64-sbuild)bremner@simplex:~$ grep Cpus /proc/self/status
Cpus_allowed: 5555,55555555,55555555
Cpus_allowed_list: 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
Next steps
In principle the relevant schroot configuration parameter can be used to run taskset before every command. In practice it's a bit fiddly, because you need a shell script shim (because of the environment variable; a sketch is below) and you need to e.g. goof around with bind mounts to make sure that your script is available in the chroot. And then there's combining with ccache and eatmydata...
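For the record, the shim itself would be tiny; a sketch, assuming it gets bind-mounted into the chroot and wired up via schroot's command-prefix setting:

#!/bin/sh
# re-apply the slurm CPU mask, if any, before running the real command
if [ -n "$SLURM_CPU_BIND_LIST" ]; then
    exec taskset "$SLURM_CPU_BIND_LIST" "$@"
else
    exec "$@"
fi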
Background
So apparently there's this pandemic thing, which means I'm teaching "Alternate Delivery" courses now. These are just like online courses, except possibly more synchronous, definitely less polished, and the tuition money doesn't go to the College of Extended Learning. I figure I'll need to manage and share videos, and our learning management system, in the immortal words of Marie Kondo, does not bring me joy. This has caused me to revisit the problem of sharing large files in an ikiwiki based site (like the one you are reading).
My go-to solution for large file management is git-annex. The last time I looked at this (a decade ago or so?), I was blocked by git-annex using symlinks and ikiwiki ignoring them for security-related reasons.
Since then two things changed which made things relatively easy.

- I started using the rsync_command ikiwiki option to deploy my site (an example is below).
- git-annex went through several design iterations for allowing non-symlink access to large files.
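For reference, the deploy option from the first point is a single line in the setup file; something like this sketch (host and path are placeholders):

rsync_command => 'rsync -qa --delete . user@webhost:public_html/',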
TL;DR
In my ikiwiki config
# attempt to hardlink source files? (optimisation for large files)
hardlink => 1,
In my ikiwiki git repo
$ git annex init
$ git annex add foo.jpg
$ git commit -m'add big photo'
$ git annex adjust --unlock # look ikiwiki, no symlinks
$ ikiwiki --setup ~/.config/ikiwiki/client # rebuild my local copy, for review
$ ikiwiki --setup /home/bremner/.config/ikiwiki/rsync.setup --refresh # deploy
You can see the result at photo
I have lately been using org-mode literate programming to generate example code and beamer slides from the same source. I hit a wall trying to re-use functions in multiple files, so I came up with the following hack. Thanks 'ngz' on #emacs and Charles Berry on the org-mode list for suggestions and discussion.
(defun db-extract-tangle-includes ()
  (goto-char (point-min))
  (let ((case-fold-search t)
        (retval nil))
    (while (re-search-forward "^#[+]TANGLE_INCLUDE:" nil t)
      (let ((element (org-element-at-point)))
        (when (eq (org-element-type element) 'keyword)
          (push (org-element-property :value element) retval))))
    retval))

(defun db-ob-tangle-hook ()
  (let ((includes (db-extract-tangle-includes)))
    (mapc #'org-babel-lob-ingest includes)))

(add-hook 'org-babel-pre-tangle-hook #'db-ob-tangle-hook t)
Use involves something like the following in your org-file.
#+SETUPFILE: presentation-settings.org
#+SETUPFILE: tangle-settings.org
#+TANGLE_INCLUDE: lecture21.org
#+TITLE: GC V: Mark & Sweep with free list
For batch export with make, I do something like the sketch below.
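This is roughly the shape of the rule (a sketch; the lecture file and init file names are hypothetical, and the init file is assumed to load org, ox-beamer, and the tangle hook above):

lecture22.pdf: lecture22.org
	emacs --batch -l ~/.emacs.d/init.el $< -f org-beamer-export-to-pdf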
What?
I previously posted about my extremely quick-and-dirty buildinfo database using buildinfo-sqlite. This year at DebConf, I re-implemented this using a PostgreSQL backend and added some new features.
There are already buildinfo and buildinfos. I was informed I needed to think up a name that clearly distinguishes it from those two. Thus I give you builtin-pho.
There's a README for how to set up a local database. You'll need 12GB of disk space for the buildinfo files and another 4GB for the database (pro tip: you might want to change the location of your PostgreSQL data_directory, depending on how roomy your /var is).
Demo 1: find things built against old / buggy Build-Depends
select distinct p.source,p.version,d.version, b.path
from
binary_packages p, builds b, depends d
where
p.suite='sid' and b.source=p.source and
b.arch_all and p.arch = 'all'
and p.version = b.version
and d.id=b.id and d.depend='dh-elpa'
and d.version < debversion '1.16'
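Note that the version comparison in the last line relies on the PostgreSQL debversion extension (packaged for Debian); it needs to be enabled in the database, something like:

-- one-time setup: Debian version comparison operators
CREATE EXTENSION IF NOT EXISTS debversion;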
Demo 2: find packages in sid without buildinfo files
select distinct p.source,p.version
from
binary_packages p
where
p.suite='sid'
except
select p.source,p.version
from binary_packages p, builds b
where
b.source=p.source
and p.version=b.version
and ( (b.arch_all and p.arch='all') or
(b.arch_amd64 and p.arch='amd64') )
Disclaimer
Work in progress by an SQL newbie.