video

October 21st, 2014

When you’ve got a bunch of sequentially numbered PNG files (anim000.png, anim001.png, …) that you want to convert to a movie, you can use

ffmpeg -i anim%03d.png -an -c:v libx264 -profile:v high -crf 23 -pix_fmt yuv420p movie.mp4
cp anim000.png movie.png
rm anim*.png

In case you forgot to save the first frame as a picture for your presentation, you can extract it with

mplayer -nosound -frames 1 -vo png:z=9 movie.mp4

Finally, if you want to rotate the movie, see http://thinkmoult.com/tech-tip-5-rotate-a-video-by-90-degrees-with-mencoder/

VNC made easy

January 31st, 2014

There are many ways to log into your computer from somewhere else via VNC (for example, you are at home and want to check the computer in the office). The following requires x11vnc to be installed on the office computer and vncviewer on the home computer. You also need to run

x11vnc -storepasswd

on your office computer to choose a password that you will need to enter in order to connect to this VNC server.

If you then launch the following script on your home computer, it tunnels via SSH into your office computer and starts the VNC server. Ten seconds later, a second terminal opens which launches vncviewer on your home computer. Make sure you enter the SSH password in the first terminal quickly so that the VNC server is started before vncviewer tries to connect to it. The password in the first terminal is your SSH password; the one in the second terminal is the x11vnc password you set earlier.

Obviously, you need to replace <username> and <computername> with the appropriate values.

#!/usr/bin/env bash
 
gnome-terminal -e "ssh -t -L 5900:localhost:5900 <username>@<computername> 'x11vnc -usepw -scale 0.6 -ncache 0 -localhost -display :0'" &
sleep 10
gnome-terminal -e "vncviewer -encodings 'copyrect tight zrle hextile' localhost:0" &
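
The fixed sleep 10 is fragile: on a slow link the server may not be up yet. A more robust approach, as a minimal Python sketch (hypothetical helper, not part of the script above), is to poll the forwarded port until it accepts connections:

```python
# Hedged sketch: poll the forwarded VNC port until it accepts a TCP
# connection, instead of sleeping a fixed 10 seconds.
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Return True once host:port accepts a TCP connection, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

The launcher would then start vncviewer as soon as the function returns True, rather than after an arbitrary delay.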

move home directory to a new computer

September 27th, 2012

To move to a new computer without having to fiddle with the settings of programs etc. again, you can do the following:

Let’s assume your user on the old computer has the name “myname”. Log in on the new computer and create a user with the name “myname” (in my case, it also had the same numerical user ID as on the old computer).

On the old computer:
Log in as root on a TTY (make sure nobody else is logged in).

cd /home
tar -pzcf myname.tar.gz myname

Then scp myname.tar.gz to the /home folder of the new computer.

On the new computer:
Log in as root on a TTY (make sure nobody else is logged in).

cd /home
mv myname myname_orig
tar -xzf myname.tar.gz

and you should have all your settings, files, … on the new computer.

copy/move files to similar names

August 30th, 2012

When you want to copy or move files to slightly different names (e.g. replacing only one letter in the filename), you can use

for i in *; do cp "$i" "$(echo "$i" | sed 's/search/replace/g')"; done

Found on http://www.minihowtos.net/copy-and-rename-multiple-files

scp bind warning

June 12th, 2012

When there are bind commands in the .bashrc of the machine that you transfer files to with scp, scp will complain

bind: warning: line editing not enabled

Instead of putting the offending commands into .bash_profile (which means they are not executed at all when you open a non-login shell), a better solution seems to be to check whether there is a terminal:

case "$TERM" in
    xterm*|rxvt*)
        bind .....
        ;;
esac

Found on http://ubuntuforums.org/showpost.php?p=11270810&postcount=14
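
The same principle, illustrated in Python (the helper name is my own): run interactive-only setup only when a terminal is actually attached.

```python
# Illustration of the terminal check above: only run interactive-only
# setup when the given stream (stdin by default) is attached to a TTY.
import sys

def run_if_interactive(setup, stream=None):
    """Call setup() only when stream is a terminal; return whether it ran."""
    stream = sys.stdin if stream is None else stream
    if stream.isatty():
        setup()
        return True
    return False
```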

print shortcut in gnuplot

June 9th, 2012

In order to print the plot that you draw with gnuplot quickly, you can save the following two lines as ~/.gnuplot:

set macros
lpr = "set terminal postscript noenhanced simplex monochrome 'Helvetica' 10; set size 0.4,0.4; set output '|lpr'; replot; set term pop; set size 1.0,1.0; replot;";

After plotting, you can then simply type

@lpr

in the gnuplot window.

convert text to number in oocalc

January 27th, 2012

When pasting text into the OpenOffice / LibreOffice spreadsheet application, the pasted numbers are recognised as text. To convert them:

  1. select the cells
  2. right click — Format Cells — Number
  3. menu “Edit”: Find and Replace — search for .* , replace with & , and check “Regular expressions”

Found on http://mynthon.net/howto/-/OpenOffice%20-%20Calc%20-%20convert%20text%20to%20number.txt
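
Outside the GUI, the same conversion is easy to express in plain Python (an illustration only, not an OpenOffice feature):

```python
# Illustration: convert numeric-looking strings to numbers, leaving
# everything else as text (what the Find & Replace trick achieves in Calc).
def convert_cells(cells):
    """Return a list where parseable strings become floats."""
    converted = []
    for text in cells:
        try:
            converted.append(float(text))
        except ValueError:
            converted.append(text)
    return converted
```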

look up bibliographical information from an arxiv id

October 18th, 2011

This Python script takes one or more arXiv IDs as input (command-line arguments) and prints BibTeX entries carrying the bibliographic information.

#!/usr/bin/env python3

import sys
import urllib.request
from xml.dom import minidom

for arg in sys.argv[1:]:
    # get the arxiv id: strip common prefixes and a trailing version suffix
    arg = arg.strip()
    for prefix in ("arxiv:", "http://", "https://", "www.", "arxiv.org/abs/"):
        if arg.startswith(prefix):
            arg = arg[len(prefix):]
    xid = arg.split('v')[0].strip()

    # download the xml from the arxiv api
    usock = urllib.request.urlopen('http://export.arxiv.org/api/query?id_list=' + xid)
    xmldoc = minidom.parse(usock)
    usock.close()

    d = xmldoc.getElementsByTagName("entry")[0]

    date = d.getElementsByTagName("updated")[0].firstChild.data
    text_year = date[:4]

    title = d.getElementsByTagName("title")[0]
    text_title = title.firstChild.data

    authorlist = []
    text_first_author_surname = ""
    for person_name in d.getElementsByTagName("author"):
        # get names
        name = person_name.getElementsByTagName("name")[0]
        text_name = name.firstChild.data
        text_given_name = ' '.join(text_name.split()[:-1])
        text_surname = text_name.split()[-1]
        authorlist.append(text_surname + ", " + text_given_name)
        # remember the first author's surname for the bibtex key
        if not text_first_author_surname:
            text_first_author_surname = text_surname

    # output
    print("@MISC{" + text_first_author_surname + text_year[-2:] + ",")
    print("author = {" + " and ".join(authorlist) + "},")
    print("title = {" + text_title + "},")
    print("year = {" + text_year + "},")
    print("eprint = {" + xid + "},")
    print("URL = {http://www.arxiv.org/abs/" + xid + "},")
    print("}")

count interesting posts in an rss feed with the feed reader canto

October 5th, 2011

The RSS feed reader canto can be extended quite easily with your own Python functions. I’ve given the keys z and x a meaning: z opens the story in the browser and increments the count of interesting stories, x does not open the story and increments the count of uninteresting stories. The results are saved across sessions so that you can do a statistical evaluation at the end of the year…

~/.canto/conf.py

from canto.extra import *
 
link_handler("firefox \"%u\"")
image_handler("eog \"%u\"", text=True, fetch=True)
filters=[show_unread, None]
never_discard("unread")
colors[1] = ("yellow", "black") # unread
colors[2] = ("blue", "black") # read
 
def my_start_hook(gui):
    import os
    import pickle
    try:
        a = open(os.path.dirname(gui.cfg.path)+'/interesting.py', 'r')
        gui.interest = pickle.load(a)
        a.close()
    except (IOError, EOFError, pickle.PickleError):
        gui.interest = {}
 
start_hook = my_start_hook
 
def my_end_hook(gui):
    import os
    import pickle
    a = open(os.path.dirname(gui.cfg.path)+'/interesting.py', 'w')
    pickle.dump(gui.interest, a)
    a.close()
 
end_hook = my_end_hook
 
def interesting(gui):
    journal = gui.sel["tag"].tag.encode('ascii', 'ignore')
    if journal not in gui.interest:
        gui.interest[journal] = {"interesting": 1, "not interesting": 0}
    else:
        gui.interest[journal]["interesting"] += 1
    gui.goto()
    gui.just_read()
    gui.next_item()
 
keys['z'] = interesting
 
def uninteresting(gui):
    journal = gui.sel["tag"].tag.encode('ascii', 'ignore')
    if journal not in gui.interest:
        gui.interest[journal] = {"interesting": 0, "not interesting": 1}
    else:
        gui.interest[journal]["not interesting"] += 1
    gui.just_read()
    gui.next_item()
 
keys['x'] = uninteresting
 
add("http://feeds2.feedburner.com/DilbertDailyStrip?format=xml", tags=["Dilbert Daily Strip"])
add("http://www.arcamax.com/cgi-bin/news/page/1007/channelfeed", tags=["Garfield"])
add("http://www.phdcomics.com/gradfeed_justcomics.php", tags=["PHD Comics"])

look up bibliographical information from a doi

October 5th, 2011

This Python script takes one or more DOIs as input (command-line arguments) and prints BibTeX entries carrying the information provided by CrossRef. You have to register there and enter the API key they give you into this script (the crossref_api_key variable near the top).

#!/usr/bin/env python3

import sys
import urllib.request
from xml.dom import minidom

debug = False

crossref_api_key = 'your_crossref_api_key'

for arg in sys.argv[1:]:
    # get the doi: strip common prefixes
    arg = arg.strip()
    for prefix in ("doi:", "http://", "https://", "dx.doi.org/"):
        if arg.startswith(prefix):
            arg = arg[len(prefix):]
    doi = arg.strip()

    # clear values from the previous iteration
    text_journal_title = ""
    text_year = ""
    text_volume = ""
    text_issue = ""
    text_title = ""
    text_first_author_surname = ""
    text_first_page = ""
    text_last_page = ""
    authorlist = []

    # download the xml from crossref
    usock = urllib.request.urlopen('http://www.crossref.org/openurl/?id=doi:' + doi
                                   + '&noredirect=true&pid=' + crossref_api_key
                                   + '&format=unixref')
    xmldoc = minidom.parse(usock)
    usock.close()

    if debug:
        print(xmldoc.toxml())
    print("")

    a = xmldoc.getElementsByTagName("doi_records")[0]
    b = a.getElementsByTagName("doi_record")[0]
    c = b.getElementsByTagName("crossref")[0]
    d = c.getElementsByTagName("journal")[0]

    journal_meta = d.getElementsByTagName("journal_metadata")[0]
    journal_title = journal_meta.getElementsByTagName("full_title")[0]
    text_journal_title = journal_title.firstChild.data

    journal_issue = d.getElementsByTagName("journal_issue")[0]
    date = journal_issue.getElementsByTagName("publication_date")[0]
    year = date.getElementsByTagName("year")[0]
    text_year = year.firstChild.data

    try:
        volume = journal_issue.getElementsByTagName("volume")[0]
        text_volume = volume.firstChild.data
    except IndexError:
        pass

    try:
        issue = journal_issue.getElementsByTagName("issue")[0]
        text_issue = issue.firstChild.data
    except IndexError:
        pass

    journal_article = d.getElementsByTagName("journal_article")[0]
    titles = journal_article.getElementsByTagName("titles")[0]
    title = titles.getElementsByTagName("title")[0]
    text_title = title.firstChild.data

    contributors = journal_article.getElementsByTagName("contributors")[0]
    for person_name in contributors.getElementsByTagName("person_name"):
        # get names
        given_name = person_name.getElementsByTagName("given_name")[0]
        text_given_name = given_name.firstChild.data
        surname = person_name.getElementsByTagName("surname")[0]
        text_surname = surname.firstChild.data
        authorlist.append(text_surname + ", " + text_given_name)
        # first author?
        sequence = person_name.attributes.getNamedItem("sequence")
        if sequence.nodeValue == 'first':
            text_first_author_surname = text_surname

    try:
        pages = journal_article.getElementsByTagName("pages")[0]
        text_first_page = pages.getElementsByTagName("first_page")[0].firstChild.data
        text_last_page = pages.getElementsByTagName("last_page")[0].firstChild.data
    except IndexError:
        pass
    # physical review journals carry an item number instead of page numbers
    if text_first_page == "":
        try:
            publisher_item = journal_article.getElementsByTagName("publisher_item")[0]
            item_number = publisher_item.getElementsByTagName("item_number")[0]
            text_first_page = item_number.firstChild.data
        except IndexError:
            pass

    # output
    print("@ARTICLE{" + text_first_author_surname + text_year[-2:] + ",")
    print("author = {" + " and ".join(authorlist) + "},")
    print("title = {" + text_title + "},")
    print("journal = {" + text_journal_title + "},")
    if text_volume != "":
        print("volume = {" + text_volume + "},")
    if text_issue != "":
        print("number = {" + text_issue + "},")
    print("year = {" + text_year + "},")
    if text_first_page != "" and text_last_page != "":
        print("pages = {" + text_first_page + "-" + text_last_page + "},")
    elif text_first_page != "":
        print("pages = {" + text_first_page + "},")
    print("doi = {" + doi + "},")
    print("}")