Problem description(s)
For some of its cheaper dedicated servers, OVH does not provide a KVM (in the virtual console sense) interface. Sometimes when a virtual console is provided, it requires a horrible java applet that won't run on modern systems without a lot of song and dance. Although OVH provides a few web based ways of installing,
- I prefer to use the debian installer image I'm used to and trust, and
- I needed some way to debug a broken install.
I have only tested this in the OVH rescue environment, but the general approach should work anywhere the rescue environment can install and run QEMU.
QEMU to the rescue
Initially I was horrified by the OVH forums post, but eventually I realized it not only gave a way to install from a custom ISO, but also provided a way to debug quite a few (but not all, as I discovered) boot problems by using the rescue env (which is an in-memory Debian Buster, with an updated kernel). The original solution uses VNC, but that seemed superfluous to me, so I modified the procedure to use a "serial" console.
Preliminaries
- Set up a default ssh key in the OVH web console
- (re)boot into rescue mode
- ssh into root@yourhost (you might need to ignore changing host keys)
- cd /tmp
- You will need qemu (and may as well use kvm). ovmf is needed for a UEFI BIOS.
apt install qemu-kvm ovmf
- Download the netinstall ISO.
- Download the vmlinuz and initrd.gz that match your ISO. In my case:
https://deb.debian.org/debian/dists/testing/main/installer-amd64/current/images/cdrom/
Doing the install
- Boot the installer in qemu. Here the system has two hard drives visible as /dev/sda and /dev/sdb.
qemu-system-x86_64 \
-enable-kvm \
-nographic \
-m 2048 \
-bios /usr/share/ovmf/OVMF.fd \
-drive index=0,media=disk,if=virtio,file=/dev/sda,format=raw \
-drive index=1,media=disk,if=virtio,file=/dev/sdb,format=raw \
-cdrom debian-bookworm-DI-alpha2-amd64-netinst.iso \
-kernel ./vmlinuz \
-initrd ./initrd.gz \
-append console=ttyS0,9600,n8
- Optionally follow the Debian wiki to configure root on software RAID.
- Make sure your disk(s) have an ESP partition.
- QEMU and d-i both use Ctrl-a as a prefix, so you need to type e.g. C-a C-a 1 to switch terminals (the first C-a gets the literal Ctrl-a past QEMU).
- Make sure you install an ssh server, and create a user account.
Before leaving the rescue environment
- You may have forgotten something important; no problem, you can boot the disks you just installed in QEMU (I leave the apt invocation here for convenient copy-pasta in future rescue environments).
apt install qemu-kvm ovmf && \
qemu-system-x86_64 \
-enable-kvm \
-nographic \
-m 2048 \
-bios /usr/share/ovmf/OVMF.fd \
-drive index=0,media=disk,if=virtio,file=/dev/sda,format=raw \
-drive index=1,media=disk,if=virtio,file=/dev/sdb,format=raw \
-nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
-boot c
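With the hostfwd option above, the installed system's sshd is reachable from the rescue environment; something like this should get you a shell (the user name is whatever account you created during the install):
ssh -p 2222 user@127.0.0.1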
One important gotcha is that the installer guesses interface names based on the "hardware" it sees during the install. I wanted the network to work both in QEMU and when booting on bare hardware, so I added a couple of link files. If you copy this, you most likely need to double check the PCI paths. You can get this information e.g. from udevadm, but note that you want to query in the rescue environment, not in QEMU, for the bare hardware case.
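For example, this (run in the rescue environment; eth0 is an assumed interface name) prints the PCI path udev sees for the physical NIC:
udevadm info -q property -p /sys/class/net/eth0 | grep ID_PATH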
/etc/systemd/network/50-qemu-default.link
[Match]
Path=pci-0000:00:03.0
Virtualization=kvm
[Link]
Name=lan0
/etc/systemd/network/50-hardware-default.link
[Match]
Path=pci-0000:03:00.0
Virtualization=no
[Link]
Name=lan0
- Then edit /etc/network/interfaces to refer to lan0.
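A minimal sketch, assuming DHCP on lan0:
auto lan0
iface lan0 inet dhcp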
Spiffy new terminal emulators seem to come with their own terminfo
definitions. Venerable hosts that I ssh into tend not to know about
those. kitty
comes with a thing to transfer that definition, but it
breaks if the remote host is running tcsh
(don't ask). Similarly the
one liner for alacritty
on the arch wiki seems to assume the remote
shell is bash. Forthwith, a dumb shell script that works to send the
terminfo of the current terminal emulator to the remote host.
EDIT: Jakub Wilk worked out this can be replaced with the oneliner
infocmp | ssh $host tic -x -
#!/bin/sh
if [ "$#" -ne 1 ]; then
printf "usage: sendterminfo host\n"
exit 1
fi
host="$1"
filename=$(mktemp terminfoXXXXXX)
cleanup () {
rm "$filename"
}
trap cleanup EXIT
infocmp > "$filename"
remotefile=$(ssh "$host" mktemp)
scp -q "$filename" "$host:$remotefile"
ssh "$host" "tic -x \"$remotefile\""
ssh "$host" rm "$remotefile"
Unfortunately schroot does not maintain CPU affinity [1]. This means
in particular that parallel builds have the tendency to take over an
entire slurm-managed server, which is kind of rude. I haven't had time
to automate this yet, but the following demonstrates a simple
workaround for interactive building.
╭─ simplex:~
╰─% schroot --preserve-environment -r -c polymake
(unstable-amd64-sbuild)bremner@simplex:~$ echo $SLURM_CPU_BIND_LIST
0x55555555555555555555
(unstable-amd64-sbuild)bremner@simplex:~$ grep Cpus /proc/self/status
Cpus_allowed: ffff,ffffffff,ffffffff
Cpus_allowed_list: 0-79
(unstable-amd64-sbuild)bremner@simplex:~$ taskset $SLURM_CPU_BIND_LIST bash
(unstable-amd64-sbuild)bremner@simplex:~$ grep Cpus /proc/self/status
Cpus_allowed: 5555,55555555,55555555
Cpus_allowed_list: 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
Next steps
In principle the schroot command-prefix configuration parameter can be
used to run taskset before every command. In practice it's a bit
fiddly because you need a shell script shim (because of the
environment variable) and you need to e.g. goof around with bind
mounts to make sure that your script is available in the chroot. And
then there's combining with ccache and eatmydata...
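For the record, the shim itself could be as small as this untested sketch (the fallback mask is made up; command-prefix cannot expand environment variables itself, hence the wrapper):
#!/bin/sh
# run every schroot command under the slurm-assigned CPU mask
exec taskset "${SLURM_CPU_BIND_LIST:-0xffffffff}" "$@"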
What?
I previously posted about my extremely quick-and-dirty buildinfo database using buildinfo-sqlite. This year at DebConf, I re-implemented this using a PostgreSQL backend, and added some new features.
There are already buildinfo and buildinfos. I was informed I need to think up a name that clearly distinguishes it from those two. Thus I give you builtin-pho.
There's a README for how to set up a local database. You'll need 12GB of disk space for the buildinfo files and another 4GB for the database (pro tip: you might want to change the location of your PostgreSQL data_directory, depending on how roomy your /var is)
Demo 1: find things built against old / buggy Build-Depends
select distinct p.source,p.version,d.version, b.path
from
binary_packages p, builds b, depends d
where
p.suite='sid' and b.source=p.source and
b.arch_all and p.arch = 'all'
and p.version = b.version
and d.id=b.id and d.depend='dh-elpa'
and d.version < debversion '1.16'
Demo 2: find packages in sid without buildinfo files
select distinct p.source,p.version
from
binary_packages p
where
p.suite='sid'
except
select p.source,p.version
from binary_packages p, builds b
where
b.source=p.source
and p.version=b.version
and ( (b.arch_all and p.arch='all') or
(b.arch_amd64 and p.arch='amd64') )
Disclaimer
Work in progress by an SQL newbie.
Emacs
2018-07-23
- NMUed cdargs
- NMUed silversearcher-ag-el
- Uploaded the partially unbundled emacs-goodies-el to Debian unstable
- packaged and uploaded graphviz-dot-mode
2018-07-24
- packaged and uploaded boxquote-el
- uploaded apache-mode-el
- Closed bugs in graphviz-dot-mode that were fixed by the new version.
- filed lintian bug about empty source package fields
2018-07-25
- packaged and uploaded emacs-session
- worked on sponsoring tabbar-el
2018-07-25
- uploaded dh-make-elpa
Notmuch
2018-07-2[23]
Wrote patch series to fix bug noticed by seanw while (seanw was) working on a package inspired by policy workflow.
2018-07-25
- Finished reviewing a patch series from dkg about protected headers.
2018-07-26
Helped seanw find the right config option for his bug report
Reviewed change proposal from aminb, suggested some issues to watch out for.
2018-07-27
- Add test for threading issue.
Nullmailer
2018-07-25
- uploaded nullmailer backport
2018-07-26
- add "envelopefilter" feature to remotes in nullmailer-ssh
Perl
2018-07-23
- Tried to figure out what documented BibTeX syntax is.
- Looked at BibTeX source.
- Ran away.
2018-07-24
- Forwarded #704527 to https://rt.cpan.org/Ticket/Display.html?id=125914
2018-07-25
- Uploaded libemail-abstract-perl to fix Vcs-* urls
- Updated debhelper compat and Standards-Version for libemail-thread-perl
- Uploaded libemail-thread-perl
2018-07-27
- fixed RC bug #904727 (blocking for perl transition)
Policy and procedures
2018-07-22
- seconded #459427
2018-07-23
- seconded #813471
- seconded #628515
2018-07-25
- read and discussed draft of salvaging policy with Tobi
2018-07-26
- Discussed policy bug about short form License and License-Grant
2018-07-27
- worked with Tobi on salvaging proposal
Introduction
Debian is currently collecting buildinfo files, but they are not very conveniently searchable. Eventually Chris Lamb's buildinfo.debian.net may solve this problem, but in the mean time, I decided to see how practical indexing the full set of buildinfo files is with sqlite.
Hack
First you need a copy of the buildinfo files. This is currently about 2.6G, and unfortunately you need to be a debian developer to fetch it.
$ rsync -avz mirror.ftp-master.debian.org:/srv/ftp-master.debian.org/buildinfo .
Indexing takes about 15 minutes on my 5 year old machine (with an SSD). If you index all dependencies, you get a database of about 4G, probably because of my natural genius for database design. Restricting to debhelper and dh-elpa, it's about 17M.
$ python3 index.py
You need at least python3-debian installed. Now you can do queries like
$ sqlite3 depends.sqlite "select * from depends where depend='dh-elpa' and depend_version<='0106'"
where 0106 is some ad hoc normalization of 1.6.
Conclusions
The version number hackery is pretty fragile, but good enough for my current purposes. A more serious limitation is that I don't currently have a nice (and you see how generous my definition of nice is) way of limiting to builds currently available e.g. in Debian unstable.
After a mildly ridiculous amount of effort I made a bootable USB key.
I then layered a bash script on top of a perl script on top of gpg. What could possibly go wrong?
#!/bin/bash
infile=$1
# key ids are the 5th colon-separated field of the "pub" lines
keys=($(gpg --with-colons "$infile" | sed -n 's/^pub//p' | cut -f5 -d:))
gpg --homedir "$HOME/.caff/gnupghome" --import "$infile"
caff -R -m no "${keys[@]}"
today=$(date +"%Y-%m-%d")
output="$(pwd)/keys-$today.tar"
for key in "${keys[@]}"; do
    (cd "$HOME/.caff/keys/" && tar rvf "$output" "$today/$key".mail*)
done
The idea is that keys are exported to files on a networked host, the files are processed on an offline host, and the resulting tarball of mail messages sneakernetted back to the connected host.
Umm. Somehow I thought this would be easier than learning about live-build. Probably I was wrong. There are probably many better tutorials on the web.
Two useful observations: zeroing the key can eliminate mysterious grub errors, and systemd-nspawn is pretty handy. One thing that should have been obvious, but wasn't to me is that it's easier to install grub onto a device outside of any container.
Find device
$ dmesg
Count sectors
# fdisk -l /dev/sdx
Assume that every command after here is dangerous.
Zero it out. This is overkill for a fresh key, but fixed a problem with reusing a stick that had a previous live distro installed on it.
# dd if=/dev/zero of=/dev/sdx bs=1048576 count=$count
Where $count
is calculated by dividing the sector count by 2048.
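For example, assuming the usual 512-byte sectors:
sectors=$(blockdev --getsz /dev/sdx)
count=$((sectors / 2048))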
Partition the disk. There are lots of options. I eventually used parted
# parted
(parted) mklabel msdos
(parted) mkpart primary ext2 1 -1
(parted) set 1 boot on
(parted) quit
Make a file system
# mkfs.ext2 /dev/sdx1
# mount /dev/sdx1 /mnt
Install the base system
# debootstrap --variant=minbase jessie /mnt http://httpredir.debian.org/debian/
Install grub (no chroot needed)
# grub-install --boot-directory /mnt/boot /dev/sdx
Set a root password
# chroot /mnt
# passwd root
# exit
Create fstab
# blkid -p /dev/sdx1 | cut -f2 -d' ' > /mnt/etc/fstab
Now edit it to fix the syntax, set the filesystem type to ext2, etc.
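The end result should be a single line along these lines (substitute the UUID that blkid printed):
UUID=<uuid-from-blkid> / ext2 errors=remount-ro 0 1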
Now switch to systemd-nspawn, to avoid boring bind mounting, etc.
# systemd-nspawn -b -D /mnt
Log in to the container and install linux-base, linux-image-amd64, and grub-pc.
EDIT: fixed block size of dd, based on suggestion of tg. EDIT2: fixed count to match block size
(Debian) packaging and Git.
The big picture is as follows. In my view, the most natural way to
work on a packaging project in version control [1] is to have an
upstream branch which either tracks upstream Git/Hg/Svn, or imports of
tarballs (or some combination thereof), and a Debian branch where both
modifications to upstream source and commits to stuff in ./debian
are
added [2]. Deviations from this are mainly motivated by a desire to
export source packages, a version control neutral interchange format
that still preserves the distinction between upstream source and
distro modifications. Of course, if you're happy with the distro
modifications as one big diff, then you can stop reading now: run
gitpkg $debian_branch $upstream_branch
and you're done. The other easy case
is if your changes don't touch upstream; then 3.0 (quilt)
packages
work nicely with ./debian
in a separate tarball.
So the tension is between my preferred integration style, and making
source packages with changes to upstream source organized in some
nice way, preferably in logical patches like, uh, commits in a
version control system. At some point we may be able use some form of
version control repo as a source package, but the issues with that are
for another blog post. At the moment then we are stuck with
trying to bridge the gap between a git repository and a 3.0 (quilt)
source package. If you don't know the details of Debian packaging,
just imagine a patch series like you would generate with git
format-patch
or apply with (surprise) quilt.
From Git to Quilt.
The most obvious (and the most common) way to bridge the gap between
git and quilt is to export patches manually (or using a helper like
gbp-pq
) and commit them to the packaging repository. This has the
advantage of not forcing anyone to use git or specialized helpers to
collaborate on the package. On the other hand it's quite far from the
vision of using git (or your favourite VCS) to do the integration that
I started with.
The next level of sophistication is to maintain a branch of
upstream-modifying commits. Roughly speaking, this is the approach
taken by git-dpm
, by gitpkg
, and with some additional friction
from manually importing and exporting the patches, by gbp-pq
. There
are some issues with rebasing a branch of patches, mainly it seems to
rely on one person at a time working on the patch branch, and it
forces the use of specialized tools or workflows. Nonetheless, both
git-dpm and gitpkg support this mode of working reasonably well [3].
Lately I've been working on exporting patches from (an immutable) git
history. My initial experiments with marking commits with git notes
more or less worked [4]. I put this on the back-burner for two
reasons, first sharing git notes is still not very well supported by
git
itself [5], and second, gitpkg maintainer Ron Lee convinced me to
automagically pick out what patches to export. Ron's motivation (as I
understand it) is to have tools which work on any git repository
without extra metadata in the form of notes.
Linearizing History on the fly.
After a few iterations, I arrived at the following specification.
The user supplies two refs upstream and head. upstream should be suitable for export as a
.orig.tar.gz
file [6], and it should be an ancestor of head. At source package build time, we want to construct a series of patches that
- Is guaranteed to apply to upstream
- Produces the same work tree as head, outside
./debian
- Does not touch
./debian
- As much as possible, matches the git history from upstream to head.
Condition (4) suggests we want something roughly like git
format-patch
upstream..head, removing those patches which are
only about Debian packaging. Because of (3), we have to be a bit
careful about commits that touch upstream and ./debian
. We also
want to avoid outputting patches that have been applied (or worse
partially applied) upstream. git patch-id
can help identify
cherry-picked patches, but not partial application.
Eventually I arrived at the following strategy.
Use git-filter-branch to construct a copy of the history upstream..head with ./debian (and for technical reasons .pc) excised (see the sketch after this list).
Filter these commits to remove e.g. those that are present exactly upstream, or those that introduce no changes, or changes unrepresentable in a patch.
Try to revert the remaining commits, in reverse order. The idea here is twofold. First, a patch that occurs twice in history because of merging will only revert the most recent one, allowing earlier copies to be skipped. Second, the state of the temporary branch after all successful reverts represents the difference from upstream not accounted for by any patch.
Generate a "fixup patch" accounting for any remaining differences, to be applied before any if the "nice" patches.
Cherry-pick each "nice" patch on top of the fixup patch, to ensure we have a linear history that can be exported to quilt. If any of these cherry-picks fail, abort the export.
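As a rough illustration, step (1) is in the spirit of the following (a sketch only; git-debcherry itself does more bookkeeping):
git filter-branch --index-filter \
    'git rm -r --cached --ignore-unmatch debian .pc' \
    -- upstream..head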
Yep, it seems over-complicated to me too.
TL;DR: Show me the code.
You can clone my current version from
git://pivot.cs.unb.ca/gitpkg.git
This provides a script "git-debcherry" which does the history linearization discussed above. In order to test out how/if this works in your repository, you could run
git-debcherry --stat $UPSTREAM
For actual use, you probably want to use something like
git-debcherry -o debian/patches
There is a hook in hooks/debcherry-deb-export-hook
that does this at
source package export time.
I'm aware this is not that fast; it does several expensive operations. On the other hand, you know what Don Knuth says about premature optimization, so I'm more interested in reports of when it does and doesn't work. In addition to crashing, generating a multi-megabyte "fixup patch" probably counts as failure.
Notes
This first part doesn't seem too Debian or git specific to me, but I don't know much concrete about other packaging workflows or other version control systems.
Another variation is to have a patched upstream branch and merge that into the Debian packaging branch. The trade-off here that you can simplify the patch export process a bit, but the repo needs to have taken this disciplined approach from the beginning.
git-dpm merges the patched upstream into the Debian branch. This makes the history a bit messier, but seems to be more robust. I've been thinking about trying this out (semi-manually) for gitpkg.
See e.g. exporting. Although I did not then know the many surprising and horrible things people do in packaging histories, so it probably didn't work as well as I thought it did.
It's doable, but one ends up spending a bunch of lines of code duplicating basic git functionality; e.g. there is no real support for tags of notes.
Since as far as I know quilt has no way of deleting files except to list the content, this means in particular exporting upstream should yield a DFSG Free source tree.
I've been experimenting with a new packaging tool/workflow based on marking certain commits on my integration branch for export as quilt patches. In this post I'll walk though converting the package nauty to this workflow.
Add a control file for the gitpkg export hook, and enable the hook: (the package is already 3.0 (quilt))
% echo ':debpatch: upstream..master' > debian/source/git-patches
% git add debian/source/git-patches && git commit -m'add control file for gitpkg quilt export'
% git config gitpkg.deb-export-hook /usr/share/gitpkg/hooks/quilt-patches-deb-export-hook
This says that all commits reachable from master but not from upstream should be checked for possible export as quilt patches.
This package was previously maintained in the "recommended topgit style" with the patches checked in on a separate branch, so grab a copy.
% git archive --prefix=nauty/ build | (cd /tmp ; tar xvf -)
More conventional git-buildpackage style packaging would not need this step.
Import the patches. If everything is perfect, you can use git quiltimport, but I have several patches not listed in "series", and quiltimport only considers patches listed in series, so I have to do things by hand.
% git am /tmp/nauty/debian/patches/feature/shlib.diff
Mark my imported patch for export.
% git debpatch +export HEAD
git debpatch list
outputs the following:
afb2c20 feature/shlib
Export: true
 makefile.in | 241 +++++++++++++++++++++++++++++++++--------------------
 1 files changed, 136 insertions(+), 105 deletions(-)
The first line is the subject line of the patch, followed by any notes from debpatch (in this case, just 'Export: true'), followed by a diffstat. If more patches were marked, this would be repeated for each one.
In this case I notice the subject line is kind of cryptic and decide to amend:
git commit --amend
git debpatch list
still shows the same thing, which highlights a fundamental aspect of git notes: they attach to commits. And I just made a new commit, so:
git debpatch -export afb2c20
git debpatch +export HEAD
Now
git debpatch list
looks ok, so we try
git debpatch export
as a dry run. In debian/patches we have:
0001-makefile.in-Support-building-a-shared-library-and-st.patch
series
That looks good. Now we are not going to commit this, since one of our overall goals is to avoid committing patches. To clean up the export,
rm -rf debian/patches
gitpkg master
exports a source package, and because I enabled the appropriate hook, I have the following:
% tar tvf ../deb-packages/nauty/nauty_2.4r2-1.debian.tar.gz | grep debian/patches
drwxr-xr-x 0/0         0 2012-03-13 23:08 debian/patches/
-rw-r--r-- 0/0       143 2012-03-13 23:08 debian/patches/series
-rw-r--r-- 0/0     14399 2012-03-13 23:08 debian/patches/0001-makefile.in-Support-building-a-shared-library-and-st.patch
Note that these patches are exported straight from git.
I'm done for now, so:
git push
git debpatch push
The second command is needed to push the debpatch notes metadata to the origin. There are corresponding fetch, merge, and pull commands.
More info
Example package: bibutils. In this package, I was already maintaining the upstream patches merged into my master branch; I retroactively added the quilt export.
As of version 0.17, gitpkg ships with a hook called quilt-patches-deb-export-hook. This can be used to export patches from git at the time of creating the source package.
This is controlled by a file debian/source/git-patches.
Each line contains a range suitable for passing to git-format-patch(1).
The variables UPSTREAM_VERSION
and DEB_VERSION
are replaced with
values taken from debian/changelog. Note that $UPSTREAM_VERSION
is
the first part of $DEB_VERSION.
An example is
upstream/$UPSTREAM_VERSION..patches/$DEB_VERSION
upstream/$UPSTREAM_VERSION..embedded-libs/$DEB_VERSION
This tells gitpkg to export the given two ranges of commits to debian/patches while generating the source package. Each commit becomes a patch in debian/patches, with names generated from the commit messages. In this example, we get 5 patches from the two ranges.
0001-expand-pattern-in-no-java-rule.patch
0002-fix-dd_free_global_constants.patch
0003-Backported-patch-for-CPlusPlus-name-mangling-guesser.patch
0004-Use-system-copy-of-nauty-in-apps-graph.patch
0005-Comment-out-jreality-installation.patch
Thanks to the wonders of 3.0 (quilt)
packages, these are applied
when the source package is unpacked.
Caveats.
Current lintian complains bitterly about debian/source/git-patches. This should be fixed with the next upload.
It's a bit dangerous if you check out such a package from git, don't read any of the documentation, and build with debuild or something similar, since you won't get the patches applied. There is a proposed check that catches most such booboos. You could also cause the build to fail if the same error is detected; this is a matter of personal taste, I guess.
I recently decided to try maintaining a Debian package (bibutils) without committing any patches to Git. One of the disadvantages of this approach is that the patches for upstream are not nicely sorted out in ./debian/patches. I decided to write a little tool to sort out which commits should be sent to upstream. I'm not too happy about the length of it, or the name "git-classify", but I'm posting in case someone has some suggestions. Or maybe somebody finds this useful.
#!/usr/bin/perl
use strict;
my $upstreamonly=0;
if ($ARGV[0] eq "-u"){
$upstreamonly=1;
shift (@ARGV);
}
open(GIT,"git log -z --format=\"%n%x00%H\" --name-only @ARGV|");
# throw away blank line at the beginning.
$_=<GIT>;
my $sha="";
LINE: while(<GIT>){
chomp();
next LINE if (m/^\s*$/);
if (m/^\x0([0-9a-fA-F]+)/){
$sha=$1;
} else {
my $debian=0;
my $upstream=0;
foreach my $word ( split("\x00",$_) ) {
if ($word=~m@^debian/@) {
$debian++;
} elsif (length($word)>0) {
$upstream++;
}
}
if (!$upstreamonly){
print "$sha\t";
print "MIXED" if ($upstream>0 && $debian>0);
print "upstream" if ($upstream>0 && $debian==0);
print "debian" if ($upstream==0 && $debian>0);
print "\n";
} else {
print "$sha\n" if ($upstream>0 && $debian==0);
}
}
}
=pod
=head1 Name
git-classify - Classify commits as upstream, debian, or MIXED
=head1 Synopsis
=over
=item B<git classify> [I<-u>] [I<arguments for git-log>]
=back
=head1 Description
Classify a range of commits (specified as for git-log) as I<upstream>
(touching only files outside ./debian), I<debian> (touching files only
inside ./debian) or I<MIXED>. Presumably this last kind is to be
discouraged.
=head2 Options
=over
=item B<-u> output only the SHA1 hashes of upstream commits (as
defined above).
=back
=head1 Examples
Generate all likely patches to send upstream
git classify -u $SHA..HEAD | xargs -L1 git format-patch -1
What is it?
I was a bit daunted by the number of mails from people signing my gpg keys at debconf, so I wrote a script to mass process them. The workflow, for those of you using notmuch is as follows:
$ notmuch show --format=mbox tag:keysign > sigs.mbox
$ ffac sigs.mbox
where previously I have tagged keysigning emails as "keysign" if I want to import them. You also need to run gpg-agent, since I was too lazy/scared to deal with passphrases.
This will import them into a keyring in ~/.ffac
; uploading is still
manual using something like
$ gpg --homedir=$HOME/.ffac --send-keys $keyid
UPDATE Before you upload all of those shiny signatures, you might
want to use the included script fetch-sig-keys
to add the
corresponding keys to the temporary keyring in ~/.ffac.
After
$ fetch-sig-keys $keyid
then
$ gpg --homedir ~/.ffac --list-sigs $keyid
should have a UID associated with each signature.
How do I use it
At the moment this has been tested once or twice by one
person. More testing would be great, but be warned this is pre-release
software until you can install it with apt-get.
Get the script from
$ git clone git://pivot.cs.unb.ca/git/ffac.git
Get a patched version of Mail::GnuPG that supports gpg-agent; hopefully this will make it upstream, but for now:
$ git clone git://pivot.cs.unb.ca/git/mail-gnupg.git
I have a patched version of the debian package that I could make available if there was interest.
Install the other dependencies.
# apt-get install libmime-parser-perl libemail-folder-perl
UPDATED 2011/07/29: libmail-gnupg-perl in Debian has supported gpg-agent for some time now.
racket (previously known as plt-scheme) is an
interpreter/JIT-compiler/development environment with about 6 years of
subversion history in a converted git repo. Debian packaging has been
done in subversion, with only the contents of ./debian
in version
control. I wanted to merge these into a single git repository.
The first step is to create a repo and fetch the relevant history.
TMPDIR=/var/tmp
export TMPDIR
ME=`readlink -f $0`
AUTHORS=`dirname $ME`/authors
mkdir racket && cd racket && git init
git remote add racket git://git.racket-lang.org/plt
git fetch --tags racket
git config merge.renameLimit 10000
git svn init --stdlayout svn://svn.debian.org/svn/pkg-plt-scheme/plt-scheme/
git svn fetch -A$AUTHORS
git branch debian
A couple points to note:
At some point there were huge numbers of renames when the project renamed itself, hence the setting for
merge.renameLimit
Note the use of an authors file to make sure the author names and emails are reasonable in the imported history.
git svn creates a branch master, which we will eventually forcibly overwrite; we stash that branch as
debian
for later use.
Now a couple of complications arose with upstream's git repo.
Upstream releases separate source tarballs for unix, mac, and windows. Each of these is constructed by deleting a large number of files from version control, and occasionally some last minute fiddling with README files and so on.
The history of the release tags is not completely linear. For example,
rocinante:~/projects/racket (git-svn)-[master]-% git diff --shortstat v4.2.4 `git merge-base v4.2.4 v5.0`
48 files changed, 242 insertions(+), 393 deletions(-)
rocinante:~/projects/racket (git-svn)-[master]-% git diff --shortstat v4.2.1 `git merge-base v4.2.1 v4.2.4`
76 files changed, 642 insertions(+), 1485 deletions(-)
The combination made my straightforward attempt at constructing a history synched with release tarballs generate many conflicts. I ended up importing each tarball on a temporary branch, and the merges went smoother. Note also the use of "git merge -s recursive -X theirs" to resolve conflicts in favour of the new upstream version.
The repetitive bits of the merge are collected as shell functions.
import_tgz() {
if [ -f $1 ]; then
git clean -fxd;
git ls-files -z | xargs -0 rm -f;
tar --strip-components=1 -zxvf $1 ;
git add -A;
git commit -m'Importing '`basename $1`;
else
echo "missing tarball $1";
fi;
}
do_merge() {
version=$1
git checkout -b v$version-tarball v$version
import_tgz ../plt-scheme_$version.orig.tar.gz
git checkout upstream
git merge -s recursive -X theirs v$version-tarball
}
post_merge() {
version=$1
git tag -f upstream/$version
pristine-tar commit ../plt-scheme_$version.orig.tar.gz
git branch -d v$version-tarball
}
The entire merge script is here. A typical step looks like
do_merge 5.0
git rm collects/tests/stepper/automatic-tests.ss
git add `git status -s | egrep ^UA | cut -f2 -d' '`
git checkout v5.0-tarball doc/release-notes/teachpack/HISTORY.txt
git rm readme.txt
git add collects/tests/web-server/info.rkt
git commit -m'Resolve conflicts from new upstream version 5.0'
post_merge 5.0
Finally, we have the comparatively easy task of merging the upstream
and Debian branches. In one or two places git was confused by all of
the copying and renaming of files and I had to manually fix things up
with git rm.
cd racket || /bin/true
set -e
git checkout debian
git tag -f packaging/4.0.1-2 `git svn find-rev r98`
git tag -f packaging/4.2.1-1 `git svn find-rev r113`
git tag -f packaging/4.2.4-2 `git svn find-rev r126`
git branch -f master upstream/4.0.1
git checkout master
git merge packaging/4.0.1-2
git tag -f debian/4.0.1-2
git merge upstream/4.2.1
git merge packaging/4.2.1-1
git tag -f debian/4.2.1-1
git merge upstream/4.2.4
git merge packaging/4.2.4-2
git rm collects/tests/stxclass/more-tests.ss && git commit -m'fix false rename detection'
git tag -f debian/4.2.4-2
git merge -s recursive -X theirs upstream/5.0
git rm collects/tests/web-server/info.rkt
git commit -m 'Merge upstream 5.0'
So this is in some sense a nadir for shell scripting. 2 lines that do
something out of 111. Mostly cargo-culted from cowpoke by Ron, but
much less fancy. rsbuild foo.dsc
should do the trick.
#!/bin/sh
# Start a remote sbuild process via ssh. Based on cowpoke from devscripts.
# Copyright (c) 2007-9 Ron <ron@debian.org>
# Copyright (c) David Bremner 2009 <david@tethera.net>
#
# Distributed according to Version 2 or later of the GNU GPL.
BUILDD_HOST=sbuild-host
BUILDD_DIR=var/sbuild #relative to home directory
BUILDD_USER=""
DEBBUILDOPTS="DEB_BUILD_OPTIONS=\"parallel=3\""
BUILDD_ARCH="$(dpkg-architecture -qDEB_BUILD_ARCH 2>/dev/null)"
BUILDD_DIST="default"
usage()
{
cat 1>&2 <<EOF
rsbuild [options] package.dsc
Uploads a Debian source package to a remote host and builds it using sbuild.
The following options are supported:
--arch="arch" Specify the Debian architecture(s) to build for.
--dist="dist" Specify the Debian distribution(s) to build for.
--buildd="host" Specify the remote host to build on.
--buildd-user="name" Specify the remote user to build as.
The current default configuration is:
BUILDD_HOST = $BUILDD_HOST
BUILDD_USER = $BUILDD_USER
BUILDD_ARCH = $BUILDD_ARCH
BUILDD_DIST = $BUILDD_DIST
The expected remote paths are:
BUILDD_DIR = $BUILDD_DIR
sbuild must be configured on the build host. You must have ssh
access to the build host as BUILDD_USER if that is set, else as the
user executing cowpoke or a user specified in your ssh config for
'$BUILDD_HOST'. That user must be able to execute sbuild.
EOF
exit $1
}
PROGNAME="$(basename $0)"
version ()
{
echo \
"This is $PROGNAME, version 0.0.0
This code is copyright 2007-9 by Ron <ron@debian.org>, all rights reserved.
Copyright 2009 by David Bremner <david@tethera.net>, all rights reserved.
This program comes with ABSOLUTELY NO WARRANTY.
You are free to redistribute this code under the terms of the
GNU General Public License, version 2 or later"
exit 0
}
for arg; do
case "$arg" in
--arch=*)
BUILDD_ARCH="${arg#*=}"
;;
--dist=*)
BUILDD_DIST="${arg#*=}"
;;
--buildd=*)
BUILDD_HOST="${arg#*=}"
;;
--buildd-user=*)
BUILDD_USER="${arg#*=}"
;;
--dpkg-opts=*)
DEBBUILDOPTS="DEB_BUILD_OPTIONS=\"${arg#*=}\""
;;
*.dsc)
DSC="$arg"
;;
--help)
usage 0
;;
--version)
version
;;
*)
echo "ERROR: unrecognised option '$arg'"
usage 1
;;
esac
done
dcmd rsync --verbose --checksum $DSC ${BUILDD_USER:+$BUILDD_USER@}$BUILDD_HOST:$BUILDD_DIR
ssh -t ${BUILDD_USER:+$BUILDD_USER@}$BUILDD_HOST "cd $BUILDD_DIR && $DEBBUILDOPTS sbuild --arch=$BUILDD_ARCH --dist=$BUILDD_DIST $DSC"
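A typical invocation, with a made-up host and package, looks like:
rsbuild --buildd=build.example.org --dist=unstable foo_1.0-1.dsc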
I am currently making a shared library out of some existing C code, for eventual inclusion in Debian. Because the author wasn't thinking about things like ABIs and APIs, the code is not too careful about what symbols it exports, and I decided clean up some of the more obviously private symbols exported.
I wrote the following simple script because I got tired of running grep by hand. If you run it with
grep-symbols symbolfile *.c
It will print the symbols sorted by how many times they occur in the other arguments.
#!/usr/bin/perl
use strict;
use File::Slurp;
my $symfile=shift(@ARGV);
open SYMBOLS, "<", $symfile or die "$!";
# "parse" the symbols file
my %count=();
# skip first line;
$_=<SYMBOLS>;
while(<SYMBOLS>){
chomp();
s/^\s*([^\@]+)\@.*$/$1/;
$count{$_}=0;
}
# check the rest of the command line arguments for matches against symbols. Omega(n^2), sigh.
foreach my $file (@ARGV){
my $string=read_file($file);
foreach my $sym (keys %count){
if ($string =~ m/\b$sym\b/){
$count{$sym}++;
}
}
}
print "Symbol\t Count\n";
foreach my $sym (sort {$count{$a} <=> $count{$b}} (keys %count)){
print "$sym\t$count{$sym}\n";
}
- Updated Thanks to Peter Pöschl for pointing out the file slurp should not be in the inner loop.
So, a few weeks ago I wanted to play some music. Amarok2 was only playing one track at a time. Hmm, rather than fight with it, maybe it is time to investigate alternatives. So here is my story. Mac-using friends will probably find this amusing.
minirok segfaults as soon I try to do something #544230
bluemingo seems to only understand mp3's
exaile didn't play m4a (these are ripped with faac, so no DRM) files out of the box. A small amount of googling didn't explain it.
mpd looks cool, but I didn't really want to bother with that amount of setup right now.
Quod Libet also seems to have some configuration issues preventing it from playing m4a's
I hate the interface of Audacious
mocp looks cool, like mpd but easier to set up, but crashes trying to play an m4a file. This looks a lot like #530373
qmmp + xmonad = user interface fail.
juk also seems not to play (or catalog) my m4a's
In the end I went back and had a second look at mpd, and I'm pretty happy with it, just using the command line client mpc right now. I intend to investigate the mingus emacs client for mpd at some point.
An emerging theme is that m4a on Linux is a pain.
UPDATED It turns out that one problem was I needed gstreamer0.10-plugins-bad and gstreamer0.10-plugins-really-bad. The latter comes from debian-multimedia.org, and had a file conflict with the former in Debian unstable (bug #544667, apparently just fixed). Grabbing the version from testing made it work. This fixed at least rhythmbox, exaile and quodlibet. Thanks to Tim-Philipp Müller for the solution.
I guess the point I missed at first was that so many of the players use gstreamer as a back end, so what looked like many bugs/configuration-problems was really one. Presumably I'd have to go through a similar process to get phonon working for juk.
So I had this brainstorm that I could get sticky labels approximately the right size and paste current gpg key info to the back of business cards. I played with glabels for a bit, but we didn't get along. I decided to hack something up based on the gpg-key2ps script in the signing-party package. I'm not proud of the current state; it is hard-coded for one particular kind of labels I have on my desk, but it should be easy to polish if anyone thinks this is an idea worth pursuing. The output looks like this. Note that the boxes are just for debugging.
There have been several posts on Planet Debian planet lately about Netbooks. Biella Coleman pondered the wisdom of buying a Lenovo IdeaPad S10, and Russell talked about the higher level question of what kind of netbook one should buy.
I'm currently thinking of buying a netbook for my wife to use in her continuing impersonation of a student. So, to please Russell, what do I care about?
- Comfortably running
- emacs
- latex
- openoffice
- iceweasel
- vlc
- Debian support
- a keyboard my wife can more or less touch type on.
- a matte screen
- build quality
I think a 10" model is required to get a decentish keyboard, and a hard-drive would be just easier when she discovers that another 300M of diskspaced is needed for some must-have application. I realize in Tokyo and Seoul they probably call these "desktop replacements", but around here these are aparently "netbooks" :)
Some options I'm considering (prices are in Canadian dollars, before taxes). Unless I missed something, these are all Intel Atom N270/N280, 160G HD, 1G RAM.
| Mfr | Model | Price | Mass (6-cell, kg) | Pros | Cons |
|--------|----------|-------|-------------------|--------------------------------------|----------------------------------------|
| Lenovo | S10-2 | $400 | 1.22 | build quality | wireless is a bit of a hassle (1) |
| Lenovo | S10e | $400 | 1.38 | express card slot | |
| Dell | Mini 10v | $350 | 1.33 | price | memory upgrade is a hassle (2) |
| Asus | 1000HE | $450 | 1.45 | well supported in debian (3), N280 | price |
| Asus | 1005HA | $450 | 1.27 | wireless OK, ore likes the keyboard | wired ethernet is very new (4), price |
| MSI | U100 | $400 | 1.3 | "just works" according to debian wiki | availability (5) |
(1) Currently needs the non-free driver broadcom-sta. On the other hand, the broadcom-sta maintainer has one. Also, b43 is supposed to support them pretty soonish.
(2) There are web pages describing how, but it looks like it probably voids your warranty, since you have to crack the case open.
(3) I don't know if the driver situation is so much better (since Asus switches chipsets within the same model), but there is an active group of people using Debian on these machines.
(4) Very new as in it currently needs patches to the Linux kernel.
(5) These seem to be end-of-lifed; stock is very limited. Price is for a six-cell battery for better comparison; 9-cell is about $50 more.
So I have been getting used to madduck's workflow for topgit and debian packaging, and one thing that bugged me a bit was all the steps required to build. I tend to build quite a lot when debugging, so I wrote up a quick and dirty script to
- export a copy of the master branch somewhere
- export the patches from topgit
- invoke debuild
I don't claim this is anywhere ready production quality, but maybe it helps someone.
Assumptions (that I remember)
- you use the workflow above
- you use pristine tar for your original tarballs
- you invoke the script (I call it tg-debuild) from somewhere in your work tree
Here is the actual script:
#!/bin/sh
set -x
if [ x$1 = x-k ]; then
keep=1
else
keep=0
fi
WORKROOT=/tmp
WORKDIR=`mktemp -d $WORKROOT/tg-debuild-XXXX`
# yes, this could be nicer
SOURCEPKG=`dpkg-parsechangelog | grep ^Source: | sed 's/^Source:\s*//'`
UPSTREAM=`dpkg-parsechangelog | grep ^Version: | sed -e 's/^Version:\s*//' -e s/-[^-]*//`
ORIG=$WORKDIR/${SOURCEPKG}_${UPSTREAM}.orig.tar.gz
pristine-tar checkout $ORIG
WORKTREE=$WORKDIR/$SOURCEPKG-$UPSTREAM
CDUP=`git rev-parse --show-cdup`
GDPATH=$PWD/$CDUP/.git
DEST=$PWD/$CDUP/../build-area
git archive --prefix=$WORKTREE/ --format=tar master | tar xfP -
GIT_DIR=$GDPATH make -C $WORKTREE -f debian/rules tg-export
cd $WORKTREE && GIT_DIR=$GDPATH debuild
if [ $? -eq 0 -a -d $DEST ]; then
cp $WORKDIR/*.deb $WORKDIR/*.dsc $WORKDIR/*.diff.gz $WORKDIR/*.changes $DEST
fi
if [ $keep = 0 ]; then
rm -fr $WORKDIR
fi
Scenario
You are maintaining a debian package with topgit. You have a topgit patch against version k and it has been merged into upstream version m. You want to "disable" the topgit branch, so that patches are not auto-generated, but you are not brave enough to just
tg delete feature/foo
You are brave enough to follow the instructions of a random blog post.
Checking your patch has really been merged upstream
This assumes that you have tags upstream/j for each version j.
git checkout feature/foo
git diff upstream/k
For each file foo.c modified in the output above, have a look at
git diff upstream/m foo.c
This kind of has to be a manual process, because upstream could easily have modified your patch (e.g. formatting).
The semi-destructive way
Suppose you really never want to see that topgit branch again.
git update-ref -d refs/topbases/feature/foo
git checkout master
git branch -M feature/foo merged/foo
The non-destructive way.
After I worked out the above, I realized that all I had to do was make an explicit list of topgit branches that I wanted exported. One minor trick is that the setting seems to have to go before the include, like this
TG_BRANCHES=debian/bin-makefile debian/libtoolize-lib debian/test-makefile
-include /usr/share/topgit/tg2quilt.mk
Conclusions
I'm not really sure which approach is best yet. I'm going to start with the non-destructive one and see how that goes.
Updated Madduck points to a third, more sophisticated approach in Debian BTS.
I wanted to report a success story with topgit, which is a rather new patch queue management extension for git. If that sounds like gibberish to you, this is probably not the blog entry you are looking for.
Some time ago I decided to migrate the debian packaging of bibutils to topgit. This is not a very complicated package, with 7 quilt patches applied to upstream source. Since I don't have any experience to go on, I decided to follow Martin 'madduck' Krafft's suggestion for workflow.
It all looks a bit complicated (madduck will be the first to agree),
but it forced me to think about which patches were intended to go
upstream and which were not. At the end of the conversion I had 4
patches that were cleanly based on upstream, and (perhaps most
importantly for lazy people like me), I could send them upstream with
tg mail
. I did that, and a few days later, Chris Putnam sent me a
new upstream release incorporating all of those patches. Of course, now I have
to package this new upstream release :-).
The astute reader might complain that this is more about me developing
half-decent workflow, and Chris being a great guy, than about any
specific tool. That may be true, but one thing I have discovered
since I started using git
is that tools that encourage good workflow
are very nice. Actually, before I started using git, I didn't even use
the word workflow
. So I just wanted to give a public thank you to
pasky for writing topgit and to madduck for pushing it into debian,
and thinking about debian packaging with topgit.
I have been thinking about ways to speed up updating multiple remote git repositories on the same host. My starting point is mr, which does the job, but is a bit slow. I am thinking about giving up some generality for some speed. In particular it seems like it ought to be possible to optimize for the two following use cases:
- many repos are on the same host
- mostly nothing needs updating.
For my needs, mr
is almost fast enough, but I can see it getting
annoying as I add repos (I currently have 11, and mr update
takes
about 5 seconds; I am already running ssh multiplexing).
I am also thinking about the needs of the Debian
Perl Modules Team, which would have over
900 git repos if the current setup was converted to one git repo per
module.
My first attempt, using perl module Net::SSH::Expect to keep an ssh channel open can be scientifically classified as "utter fail", since Net::SSH::Expect takes about 1 second to round trip "/bin/true".
Initial experiments using IPC::PerlSSH are more promising. The following script grabs the head commit in 11 repos in about 0.5 seconds. Of course, it still doesn't do anything useful, but I thought I would toss this out there in case there already exists a solution to this problem I don't know about.
#!/usr/bin/perl
use IPC::PerlSSH;
use Getopt::Std;
use File::Slurp;

my %config;
eval("\%config=(" . read_file(shift(@ARGV)) . ")");
die "reading configuration failed: $@" if $@;

my $ips = IPC::PerlSSH->new(Host => $config{host});
$ips->eval("use Git");
$ips->store("ls_remote", q{
    my $repo = shift;
    return Git::command_oneline('ls-remote', $repo, 'HEAD');
});

foreach my $repo (@{$config{repos}}) {
    print $ips->call("ls_remote", $repo);
}
P.S. If you google for "mr joey hess", you will find a Kiss tribute band called Mr. Speed, started by Joe Hess.
P.P.S. Hello planet debian!
So I spent a couple hours editing Haskell. So of course I had to spend at least that long customizing emacs. My particular interest is in so called literate haskell code that intersperses LaTeX and Haskell.
The first step is to install haskell-mode and mmm-mode.
apt-get install haskell-mode mmm-mode
Next, add something like the following to your .emacs
(load-library "haskell-site-file")
;; Literate Haskell [requires MMM]
(require 'mmm-auto)
(require 'mmm-haskell)
(setq mmm-global-mode 'maybe)
(add-to-list 'mmm-mode-ext-classes-alist
'(latex-mode "\\.lhs$" haskell))
Now I want to think about these files as LaTeX with bits of Haskell in them, so
I tell auctex that .lhs
files belong to it (also in .emacs)
(add-to-list 'auto-mode-alist '("\\.lhs\\'" . latex-mode))
(eval-after-load "tex"
'(progn
(add-to-list 'LaTeX-command-style '("lhs" "lhslatex"))
(add-to-list 'TeX-file-extensions "lhs")))
In order that the
\begin{code}
\end{code}
environment is typeset nicely, I want any latex file that uses the
style lhs
to be processed with the script lhslatex
(hence the
messing with LaTeX-command-style
). At the moment I just have an
empty lhs.sty
, but in principle it could contain useful definitions,
e.g. the output from lhs2TeX on a file containing only
%include polycode.fmt
The current version of lhslatex is a bit crude. In particular it assumes you want to run pdflatex.
The upshot is that you can use AUCTeX mode in the LaTeX part of the buffer
(i.e. TeX your buffer) and haskell mode in the \begin{code}\end{code}
blocks
(i.e. evaluate the same buffer as Haskell).
To convert an svn repository containing only "/debian" to something compatible with git-buildpackage, you need to do some work. Luckily zack already figured out how.
#
package=bibutils
version=3.40
mkdir $package
cd $package
git-svn init --stdlayout --no-metadata svn://svn.debian.org/debian-science/$package
git-svn fetch
# drop upstream branch from svn
git-branch -d -r upstream
# create a new upstream branch based on recipe from zack
#
git-symbolic-ref HEAD refs/heads/upstream
git rm --cached -r .
git commit --allow-empty -m 'initial upstream branch'
git checkout -f master
git merge upstream
git-import-orig --pristine-tar --no-dch ../tarballs/${package}_${version}.orig.tar.gz
If you forget to use --authors-file=file
then you can fix up your
mistakes later with something like the following. Note that after
someone has cloned your repo, this makes life difficult for them.
#!/bin/sh
project=vrr
name="David Bremner"
email="bremner@unb.ca"
git clone alioth.debian.org:/git/debian-science/packages/$project $project.new
cd $project.new
git branch upstream origin/upstream
git branch pristine-tar origin/pristine-tar
git-filter-branch --env-filter "export GIT_AUTHOR_EMAIL='$email' GIT_AUTHOR_NAME='$name'" master upstream pristine-tar
I am in the process of migrating (to git) some debian packages from a subversion repository created with svn-inject -l 2, namely
/trunk/package
/branches/upstream/package/
/tags/package
Here is a script I wrote that seems to do the trick
#!/bin/sh
package=$1
stage=$1.from-svn
set -x
# my debian packages live under $SVNROOT/debian, with layout 2
mkdir $stage
cd $stage
git-svn init --no-metadata \
--trunk $SVNROOT/debian/trunk/$package \
--branches $SVNROOT/debian/branches/upstream/$package \
--tags $SVNROOT/debian/tags/$package
git-svn fetch
git branch -r upstream current
cd ..
# git clone --bare loses some gunk from git-svn. Anyway we need a bare repo
git clone --bare $stage $1.git
rm -rf $stage
Your mileage may vary of course.
UPDATED Apparently 'git branch -r upstream current' no longer works, if it ever did. If anyone can psychically figure out what I wanted to do there, I'm happy to translate that into git.
So you have a pdf form, and you want to fill it in on linux. You hate acrobat reader. Ok, so all six of you read on.
First install pdftk. If you are using debian,
apt-get install pdftk
If you are not using debian, first install debian :-).
Now you need a pdf file with form data. We suppose for the sake of
argument that your file is foo.pdf
. Try
pdftk foo.pdf dump_data_fields
Yes, the order of arguments is goofy. You should get some output that looks like
FieldType: Text
FieldName: M3
FieldFlags: 4194304
FieldJustification: Left
---
FieldType: Text
FieldName: D3
FieldFlags: 4194304
FieldJustification: Left
M3 and D3 are your field names. Now get my script which can convert this output into something useful. At this point you may want to reconsider how much you hate acrobat. Or investigate okular. Assuming you are still here, run
pdftk foo.pdf dump_data_fields | perl fields2pl.pl > foo.pl
This will give you a template that you can fill in. If you have to
fill out the same form many times (e.g. an expense form), save this
template somewhere. Now to fill in your form, you need an FDF
file.
One way to make one is to edit the template I made you create above,
and then convert it to FDF
. First install the FDF
converter.
apt-get install libpdf-fdf-simple-perl
Now use something like genfdf.pl to make an fdf file.
perl genfdf.pl foo.pl > foo.fdf
You are almost there. To actually fill in the form, you use the command
pdftk foo.pdf fill_form foo.fdf output filled.pdf
If you do this all many times, consider making a Makefile. Here is a fragment
.SUFFIXES: .pdf .fdf .csv .gnumeric .pl
.fdf.pdf:
pdftk Expenses.pdf fill_form $< output $@
.pl.fdf:
genfdf.pl $< > $@
example.pdf: example.fdf
example.fdf: example.pl
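With those rules in place, regenerating a filled form after editing the template is just:
make example.pdf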
I wanted to try out raw image processing with my Canon EOS 350D (Rebel XT). I have been using digikam for about a year now to edit and sort my photos, so I wanted to see if its (new) raw handling was usable.
The first thing I noticed was that handling of raw images in the viewer/editor was a little rough around the edges in the current release 0.9.2. A comment in the kde bug tracker suggested that some of these bugs were fixed in svn, so off I went.
Compiling from SVN went relatively smoothly. To avoid the hassles of
having KDE pieces in /usr/local, I used the nifty (when it works)
checkinstall to build debian packages. Because I have one debian
package for all of the libs and one for digikam, this conflicts with
several existing debian packages. Not to fear, just keep removing
things and try dpkg -i
again.
The next gotcha is that graphicsmagick on debian does not currently grok 16-bit (per channel) PNG images (Debian Bug). A workaround is to install imagemagick instead.
In our next exciting episode, I discuss workflow.
Another sad software story, with a happy ending.
pushmi provides a read/write mirror of an svn repository.
- My repository is svn+ssh://, which normally works fine with ssh-agent.
- Unfortunately pushmi relies on svn hooks to update the master repository, and svn in its wisdom does not preserve any environment variables, thus making ssh-agent not work very well.
- A solution is to install the package keychain, which writes the ssh-agent environment variables out to disk (this sounds trivial, but it also finds the running ssh-agent without any configuration).
- To both $REPOSITORY/hooks/pre-commit and $REPOSITORY/hooks/post-commit, add the following snippet to the beginning of the script:
KEYCHAINFILE=/home/bremner/.keychain/`hostname`-sh
if [ -f $KEYCHAINFILE ]; then
    . $KEYCHAINFILE
fi
You might need to adjust the path if you are not me.
UPDATED You need the option --inherit local or --inherit all to keychain to get it to overwrite the previous version of its files.
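So the keychain invocation, e.g. in your shell startup, looks something like this (the key name is an assumption):
keychain --inherit local id_rsa
. $HOME/.keychain/`hostname`-sh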
I used the script by Franklin Piat to build a debian package from unreleased alsa sources. Works great once I remembered the hardware volume control buttons. Doh! A big shout-out to Franklin for what seems like a very clean solution.
Update
As of version 1.0.15-2 I am using the default alsa-modules source package, compiled via module-assistant
Suspend to disk works ok out of the box, if you type (as root)
echo -n disk > /sys/power/state
I like the command line and all, but somehow this is a bit tedious. To get Fn-F12 working, one has to first load the thinkpad-acpi module
modprobe thinkpad-acpi
Next, one has to enable the hotkey subdriver
echo enable > /proc/acpi/ibm/hotkey
These two steps can be automated by creating /etc/acpi/start.d/40-thinkpad-acpi.sh (see the sketch below). Finally, work around a problem with the default /etc/acpi/hibernatebtn.sh: for some reason acpi_fakekey is not doing its magic thing, so I replaced it with a call to hibernate.sh. There are already some bug reports about this in debian. Now that we have enabled thinkpad-acpi hotkeys, we have to apply the same hack to /etc/acpi/sleepbtn.sh.
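A minimal, untested sketch of that file:
#!/bin/sh
# /etc/acpi/start.d/40-thinkpad-acpi.sh: load the driver and enable
# the hotkey subdriver every time acpid starts
modprobe thinkpad-acpi
echo enable > /proc/acpi/ibm/hotkey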
Be warned, this may not play nicely with the KDE/GNOME GUI suspend/resume tools.
First install the firmware:
apt-get install firmware-iwlwifi
Next, rebuild the kernel and reboot (this is not strictly necessary, but it is what I did; other people can tell you other ways), so that you have the source for the kernel you are currently running available.
Download the latest kernel module source from intellinuxwireless.org
Unpack, make (probably twice), su, make install (see the sketch below).
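Roughly, typed at a shell (the tarball name is a guess; use whatever you downloaded):
tar xzf iwlwifi-*.tgz
cd iwlwifi-*
make
make   # yes, probably twice
su
make install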
Update
To avoid recompiling your kernel, you can follow the steps from Stuart Prescott
Out of the box, it will go to sleep (suspend to ram)
when you hit Fn-F4, but on wakeup the backlight stays off.
Based on a Ubuntu bug report (which I've now lost track of) that mentioned
switching virtual terminals turned the backlight back on, I
created /etc/acpi/resume.d/99-switchvt.sh to automate that.
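Presumably the script just bounces to another virtual terminal and back; a sketch (the VT numbers are an assumption):
#!/bin/sh
# switching VTs kicks the backlight back on after resume
chvt 12
chvt 7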
Not the most beautiful solution in the world, but it works. I have
submitted a debian bug (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=439914).
- UPDATED Another workaround is to add the kernel parameter acpi_sleep=s3_bios
I am running Debian (lenny/sid) on a thinkpad x61. Initially I was running the stock 2.6.22-1-686 kernel; to get wifi going I decided to rebuild the kernel. I have installed the acpi-support package, which then requires some hacking. The various issues are tagged x61.