Recent Updates

  • simon 12:12 on 21 March 2016

    show lots of images with their title in one image 

    Using imagemagick’s montage command, it’s possible to show lots of images in one big compiled image, with or without their filename as title…

    Example:

    montage -label '%t' -geometry 120x180+5+5 *png page.png

    If you leave out “-label '%t'”, the images appear without their title.
    Here I specify all .png files, and output one page.png.

    Lots more options available; see the montage examples.

     
  • simon 19:42 on 28 September 2015

    openoffice is dead, long live libreoffice 

    Somebody on Slashdot says so:

    But it’s probably true, as lwn.net points this out as well:

    So go for LibreOffice for all your documents formatted in open standards.

    For those who like proprietary more, try LaTeX (it’s kind of open too 😉)

    /Simon

     
  • simon 20:38 on 27 December 2014

    enabling higher wifi channels on openwrt 

    In order to tweak openwrt to allow usage of the 2.4 GHz channels 12-13 I used this hack: http://luci.subsignal.org/~jow/reghack/

     
  • simon 20:26 on 27 December 2014

    OpenWRT as Access Point (using luci) 

    After a bit of searching and messing about, I figured out a really easy way to configure a TP-Link Archer C5 as a wireless access point (2.4 GHz and 5 GHz) using (mostly) the luci web interface:

    • default openwrt install (as a router)
    • enable and configure the wireless as desired
    • go to the Network -> Interfaces page in luci
    • edit br-lan (the interface that serves the LAN and wlan interfaces)

      • disable the DHCP and IPv6 server functionality
      • configure a valid static (ipv4) network address on the interface for your existing network (also configure the dns server to be your router, etc.)
      • under physical settings, make sure bridge interfaces is on (it was by default) and check all the interfaces to be bridged (I didn’t check the custom interface, as I don’t think I have it)
      • optionally enable STP, if you expect switches might be connected.

    After I did all this, I applied and saved the changes.
    Reboot and done.
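For reference, the luci steps above roughly correspond to an /etc/config/network section like this (a sketch; the interface name and addresses are examples, not taken from my actual device):

```
config interface 'lan'
        option type 'bridge'
        option ifname 'eth0.1'
        option proto 'static'
        option ipaddr '192.168.1.2'
        option netmask '255.255.255.0'
        option gateway '192.168.1.1'
        option dns '192.168.1.1'
```

Disabling the DHCP server corresponds to setting option ignore '1' in the lan section of /etc/config/dhcp.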

    Before I started, I was fully prepared to use ssh and vi to configure all this manually in the configuration files, but this was unnecessary.

     
  • simon 14:39 on 19 August 2013
    Tags: git, puppet   

    Cooperating on a Puppet config using Git 

    Assume you have a puppet configuration set up for “development, testing, production” as in Environments en git, een must (in Dutch). Basically it’s a setup with three environments based on a single repository.

    The article uses no branches in git, which is a shame; it also implies a single user (root).

    In this post I’ll attempt to expand this idea to include both, so that multiple sysadmins can work on puppet changes and enhancements without bothering each other in the process.

    Multiple users

    Being a distributed VCS, Git is designed to let multiple users work on (a clone of) the same repository. The workflow that works best for tracking changes by multiple users is to have each user clone the repository, work on changes under their own name and, when the time comes to exchange the changes, push or pull to either a central repository or another’s clone.

    If e.g. Chantal wants to work on a specific feature, she can make a feature branch in her clone, work on the feature and push the feature branch back to the origin (the central repo). Or work it out completely, merge it into her master branch and push that back.

    The problem is actually a bit more complicated, because we’re working with a puppet configuration. Testing it requires a puppet master and puppet client hosts. What if we could test the branch, without merging it into the main puppet environments (development, testing, production)?

    Expanding the three stage puppet config model

    In the article mentioned, changes are first developed and tested in the development environment; this requires a dedicated test machine that nobody relies on for work. When a feature works, the developer pushes the changes to the central repo. Then it can be further tested in the testing environment, e.g. on less important servers; this happens when a “git pull” command is done in the clone called testing.
    Finally, when the tests are all OK, the changes are merged into the production environment and all agents working in the production environment get the new changes.

    This doesn’t involve branches, and if multiple admins work on new features in development, they will likely interfere with each other and stuff will break!

    Now I’m not yet very familiar with puppet, but I understand it is possible to create any number of environments which can be referred to on the clients via their puppet agent configuration.

    In general it would have been useful to use branches in the original model so that e.g. master is tracked only on the production environment, testing only in the testing environment and development only in the development environment. Any commits done on the development branch cannot accidentally end up in the testing or production branch/environment without an explicit merge!

    Once we have these branches, what’s to stop us from adding more branches and corresponding puppet-environments to test the puppet code while developing it, without interfering with either production or other admins working on different features? Nothing!
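As a sketch of this branch-per-environment idea (the repository location, paths and the helper function name are my assumptions, not from the original setup), creating a feature environment could look like:

```shell
# Create a per-feature clone of the central puppet repo and a matching
# feature branch, so the clone can be served as its own puppet environment.
new_feature_env() {
    central=$1   # path or URL of the central puppet repository
    envdir=$2    # base directory for environments, e.g. /etc/puppet/environments
    feature=$3   # name used for both the branch and the environment
    git clone "$central" "$envdir/$feature" &&
    git -C "$envdir/$feature" checkout -b "$feature"
}

# usage (hypothetical paths):
# new_feature_env /srv/git/puppet.git /etc/puppet/environments chantalfeature
```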

    For each feature branch we want to test, a new entry must be created in /etc/puppet/puppet.conf to add an environment. The new environment should be in the same base directory as the normal branches, since hiera configuration cannot handle multiple locations for the hiera data.

    /etc/puppet/hiera.yaml:

    ...
    :datadir: /etc/puppet/environments/%{environment}/hieradata

    /etc/puppet/puppet.conf:

    ...
    [chantalfeature]
    manifest = /etc/puppet/environments/chantalfeature/manifests/site.pp
    modulepath = /etc/puppet/environments/chantalfeature/modules

    All Chantal then has to do is to have a testing server look at the chantalfeature environment while she’s developing it.
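On the test node this could be as simple as pointing the agent at the new environment (a sketch of the agent side of /etc/puppet/puppet.conf):

```
[agent]
environment = chantalfeature
```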

    When the feature is ready, it can be merged into testing:

    /etc/puppet/environments/chantalfeature/$ git checkout testing
    /etc/puppet/environments/chantalfeature/$ git pull #assuming testing is a tracking branch for origin/testing
    /etc/puppet/environments/chantalfeature/$ git merge --no-ff chantalfeature
    /etc/puppet/environments/chantalfeature/$ git push

    To get the changes in the real testing environment, that clone needs to be updated as well…

    And then the usual progression to production can follow… Well, “usual”: when working with branches for the different environments, the command flow for Git is slightly different.

    in /etc/puppet/environments/production
    $ git fetch
    $ git merge --no-ff origin/testing

    Conclusion

    It looks like development as a separate environment is no longer needed; its role is taken over by the feature branches and corresponding environments. These branches and environments are temporary and can be deleted after their contents have been integrated into testing.

    NB: so far this is a theoretical exercise; I’ll post an update when I’ve had some time to test this model…

    update: 4 sept 2013:
    I’ve updated the text regarding where the feature’s environment directory can be, as it turns out that hiera.yaml is configured to look under the standard directory /etc/puppet/environments/%{environment}/hieradata.

    For this to work easily, it’s probably a good idea to create a group puppetmasters and add all users who may modify/test puppet to this group. Then make /etc/puppet/environments/ writable for this group. This way there’s no need to become root to create and modify a test environment.

     
  • simon 13:35 on 11 January 2013

    Monitoring multiple vhosts with nagios/icinga/shinken 

    At work we wanted to monitor vhosts on our webservers; some webservers have lots of vhosts, and they may change relatively often. The same is true for tomcat servers, though they usually listen on a different port…

    I looked around but I couldn’t find any specific solution to monitor vhosts in large quantities and be flexible enough to work on a different port as well.

    So I wrote my own nagios plugin. It works well enough; it requires some sort of content string to match against what comes back from the server, which isn’t always easy when you run lots of instances of the same software (like geoserver).

    The script uses an input file, so you can change the vhosts to monitor, without restarting nagios.

    I use the following command definitions:

    1. 'check_vhosts' command definition

    define command {
    command_name check_vhosts
    command_line $USER1$/check_vhosts -H $ARG1$ -f $ARG2$ -v 1
    }

    2. 'check_vhosts_alt' command definition

    define command {
    command_name check_vhosts_alt
    command_line $USER1$/check_vhosts -H $ARG1$ -p 8080 -f $ARG2$ -v 1
    }

    If the vhost normally runs behind a high-availability service (pound/relayd/haproxy), you would be able to check the vhost’s availability, but not the availability of the vhost on the backend. With this script you can specify the host address to use (its IP number), so any proxy is bypassed; that is how I’ve configured it here (-H). If you leave this out, the check uses the vhost’s fqdn in the full URL.


    define service {
    use generic-service
    host_name www1,www2
    check_interval 60
    service_description VHOSTS
    check_command check_vhosts!$HOSTADDRESS$!$_HOSTVHOSTFILE$
    }

    define service {
    use generic-service
    host_name tomcat1,tomcat2,tomcat3
    check_interval 60
    service_description VHOSTS-alt
    check_command check_vhosts_alt!$HOSTADDRESS$!$_HOSTVHOSTFILE$
    }

    As you can see, it uses custom macros from the host definition, so you have one place to manage the service definition and each host defines its own (_VHOSTFILE) variable to specify the list of vhosts for that server.
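A matching host definition could look like this (the address and file path are made-up examples):

```
define host {
use generic-host
host_name www1
address 192.0.2.10
_VHOSTFILE /etc/nagios/vhosts/www1.txt
}
```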

     
  • simon 10:06 on 8 January 2013

    using git-prompt.sh with colour hints 

    Now that v1.8.1 is out (released on December 31st, 2012), the code to make a prompt showing git’s status in colour is in a stable release.

    To use it, you don’t need to install the entire release; if you just want the colour option, copy the file contrib/completion/git-prompt.sh to a safe place (e.g. ~/.git-prompt.sh) and source it in your .bashrc, along with some lines to actually activate the code. I’ll show here how to do that.

    In your ~/.bashrc add these lines:

    if [ -f ~/.git-prompt.sh ]; then
    . ~/.git-prompt.sh
    GIT_PS1_SHOWDIRTYSTATE=true
    GIT_PS1_SHOWCOLORHINTS=true
    GIT_PS1_SHOWUNTRACKEDFILES=true
    PROMPT_COMMAND="__git_ps1 '\u@\h:\w' '\\$ '"
    fi

    And you’ll get a prompt with colour when inside a directory with a git repository.

    The code does the following:

    • test if ~/.git-prompt.sh exists and source it
    • set variables to activate showing git’s state in the prompt, when __git_ps1 is called
      1. GIT_PS1_SHOWDIRTYSTATE; show a * to indicate unstaged files or + for staged files
      2. GIT_PS1_SHOWSTASHSTATE; show that something is in the stash ($)
      3. GIT_PS1_SHOWUNTRACKEDFILES; show files in the current directory that are not being tracked by git
      4. GIT_PS1_SHOWUPSTREAM="auto"; show upstream status (can be further customised)
      5. GIT_PS1_SHOWCOLORHINTS; show in colour what the DIRTY state is, must be used in combination with GIT_PS1_SHOWDIRTYSTATE and PROMPT_COMMAND mode
    • define PROMPT_COMMAND, which is a command (a function, __git_ps1, in this case). To get __git_ps1 to work properly in this mode, we need to give it two (or three) parameters. The first argument is what is put in the PS1 variable before the status of the git tree; the second argument defines what comes after it.
    • By default (with just 2 arguments), __git_ps1 puts the branch information in the printf format string ” (%s)”. If that isn’t how you want it, you can add a third parameter with a custom format string.

    Examples:

    1. branch before the prompt:

      PROMPT_COMMAND="__git_ps1 '' '\u@\h:\w\\$ ' '%s:'"

    2. branch between square brackets:

      PROMPT_COMMAND="__git_ps1 '\u@\h:\w' '\\$ ' ' [%s]' "

    You may be wondering: “Why aren’t you setting the PS1 variable?” The thing is that PROMPT_COMMAND sets this variable, so we don’t have to (it would be overwritten anyway).

    Bash will call the function defined for PROMPT_COMMAND every time it is going to prompt the user for a new command.

    Some more links about PROMPT_COMMAND:

    The “old” way still works, but colour isn’t possible when using command substitution:

    PS1='\u@\h:\w $(__git_ps1 "(%s)")\$ '
    or
    PS1='\u@\h:\w `__git_ps1 "(%s)"`\$ '

    edit 9 Jan 2013: explain variables more thoroughly and how the command substitution mode is used.

    Post scriptum (3 Jan 2014): Someone else has done this too: https://github.com/magicmonty/bash-git-prompt

     
  • simon 21:21 on 7 January 2013

    git prompt in colour 

    Git 1.8.1 is out and it contains a few lines of code from me 🙂

    A bit of history…

    A long time ago, when my colleagues at at-computing were messing with git, I was playing with putting the branch name, in a colour reflecting the state of the git tree, inside the prompt: whenever I entered a directory with a git repository, the branch would show up in the prompt, and moreover it would show whether there were changed files present (the colour of the branch name would change to red, and/or yellow if the changes were already staged).

    The way this worked was by calling a function using command substitution from the PS1 (since this is evaluated by bash every time the prompt is printed).

    PS1='\u@\h:\w $(gitprompt)\$ '

    This was good enough for me, and I even accepted that my command-line wrapping got messed up. I didn’t know or care enough about this, and I called it a bug in bash (sorry, it wasn’t).

    A few years later, I ran into a file in my debian installation (I forget why or how, but I did) called git in /etc/bash_completion.d/. Basically it did the same, but both better and worse: it had far more sophisticated code to finger the git repository, it was faster and had no wrapping issues, but it had no colours 🙁

    I thought: well, it’s free and open, so I can just modify it to print colours.
    And I did; it wasn’t that hard, but now it had the wrapping issue again (I didn’t notice, since I was used to it). I proudly posted my solution to the git mailinglist and, of course, people noticed problems…

    • it had wrapping issues; someone mentioned PROMPT_COMMAND
    • it had the wrong colours (I should have used only the colours git itself uses when it uses colours)
    • did it work with zsh, like the existing version?

    Ok, back to the drawing board. Searching for a solution to the wrapping problem, I found out that bash requires \[ and \] around terminal control codes that produce zero-length prompt output, like beeps and colours.

    I figured out how to do it with PROMPT_COMMAND, but I lazily copied the __git_ps1 function to create __git_ps1_pc, since setting PS1 that way was so different.
    I posted my “solution” again and this time I got some more comments:

    • I was duplicating code
    • it didn’t work with zsh
    • still wasn’t using the right colours

    So again I went coding away, fixed all of the issues and after some good suggestions from Junio C. Hamano (the git maintainer) I managed to get it into an acceptable state.

    However, this wasn’t the end of it. A while later, I got a question about the usage of git-prompt.sh in a release candidate. It turned out the documentation wasn’t clear yet. After some e-mails I thought the issue was fixed (but I hadn’t provided a patch). Later, I was showing the colourful prompt to some people and I noticed the improved documentation was not yet in the RC2.

    I mentioned this on the mailinglist and in return I found that Junio wasn’t entirely happy with the situation and the code. I further modified the code and documentation to fix the most urgent worries, but in the end, I’m also not entirely happy with the end result in 1.8.1.

    The issue that needs fixing is the hacky way of differentiating between command-substitution mode, which works in bash and zsh and is more or less as it was, and PROMPT_COMMAND mode. The function switches modes by counting its parameters: 0 or 1 for command-substitution mode, 2 or 3 for PROMPT_COMMAND mode.

    I’d rather have different functions for the different modes, but the trick will be to not have duplicated code to maintain.

    In a next post, I will explain how to use the code as it is now…

    /Simon

     
  • simon 08:36 on 28 June 2012
    Tags: linux bash latex sed script   

    bash (sed) script to convert Latex chapter levels to section 

    When working with LaTeX, the book class has an additional level called chapter, but in the article class this isn’t available; there, “section” is the top level. Here’s a small bash+sed script to convert from book (with chapters) to the other styles… (Going back is a different matter, especially since subsubsection is usually the lowest level in both modes, so you can’t convert subsubsection to subsubsubsection…)

    #!/bin/bash
    
    if [ $# -lt 1 ]
    then
            echo "need file arg" >&2
            exit 1
    fi
    
    sed -i.bak -e '/subsubsection/s/\\subsubsection{.*}/& %subsubsection in chaptermode/' \
            -e '/\\subsection{/s//\\subsubsection{/' \
            -e '/\\section{/s//\\subsection{/' \
            -e '/\\chapter{/s//\\section{/' "$1"
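To illustrate the effect, here’s a throwaway demo (demo.tex is a made-up file; the sed expressions mirror the script above):

```shell
# Create a small book-mode sample and run the same conversion on it.
cat > demo.tex <<'EOF'
\chapter{Intro}
\section{Background}
\subsection{Details}
\subsubsection{Fine print}
EOF

sed -i.bak -e '/subsubsection/s/\\subsubsection{.*}/& %subsubsection in chaptermode/' \
        -e '/\\subsection{/s//\\subsubsection{/' \
        -e '/\\section{/s//\\subsection{/' \
        -e '/\\chapter{/s//\\section{/' demo.tex

cat demo.tex
# \section{Intro}
# \subsection{Background}
# \subsubsection{Details}
# \subsubsection{Fine print} %subsubsection in chaptermode
```

Note the order of the expressions: each heading is demoted exactly once per line, because by the time a line has been rewritten, the expressions that could match the new text have already run.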
    
    
    
     
  • simon 14:23 on 23 April 2012
    Tags: pf firewall   

    openbsd firewall rules to get in via the non-default gw 

    In our setup we have at least two potential default gateways available for our office network. The office network is behind a pf firewall, which, among other tasks, is doing NAT for the office network.

    One of the uplinks is relatively cheap and the other is used to connect to our servers in the datacentre. For stability we want to use the server uplink for internal management (remote login); for the rest of the Internet traffic (you know, people watching youtube etc.) we want to use the cheap cable provider.

    As I’m relatively new to pf, I figured I’d be able to find some good examples on how to use pf to facilitate this kind of setup, but no, nobody seems to have put up a good example of how to do this.

    After some digging, I found that reply-to is probably the solution, but no example syntax is provided anywhere, so eventually I turned to IRC to figure out a possible way to let pf understand what I want.

    Returning to the problem: basically the default route determines where Internet traffic goes out, and this trumps any states kept by pf about incoming connections.

    Incoming connections are rare in our setup, but they are used to work on office pc’s from home. In pf this can be done using rdr-to syntax to translate from the external address of the firewall to the internal (rfc1918) address.

    When an incoming connection arrives on the firewall’s non-default external interface, a nasty thing happens: returning traffic goes out the other interface, and pf cannot match that traffic to the state of the incoming connection.

    That is, unless you tell pf to watch for and reconnect this return traffic to the existing state. (I have no idea why anyone would not want this to happen).

    Ok, so here’s the simplified configuration (it looks somewhat like the corresponding parts of our openbsd 4.9 pf.conf file):

    # definitions
    ext_if_a="em1" #datacentre link
    ext_if_b="em2" #cheap link
    int_if="em3" #office network
    
    a_gw="a.a.a.1/32"
    b_gw="b.b.b.1/32" # default route!
    
    a_ip="a.a.a.2/32"
    b_ip="b.b.b.2/32"
    
    off_net="192.168.1/24"
    
    admin_at_home="c.c.c.c/32"
    office_adminhost="192.168.1.10/32"
    
    #NAT rules (NB, one for each possible external interface)
    
    match out on $ext_if_a inet from $off_net nat-to $ext_if_a  
    match out on $ext_if_b inet from $off_net nat-to $ext_if_b  
    
    # incoming management connection and a hint using reply-to for pf to connect returning traffic to the existing state
    pass in on $ext_if_a proto tcp from $admin_at_home to $a_ip rdr-to $office_adminhost port ssh reply-to ($ext_if_a $a_gw)
    

    This seems to work, though it feels a bit like a hack to me.
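The mirror image of reply-to is route-to, which forces selected outgoing traffic out a specific uplink regardless of the default route. A sketch using the same macros (this rule is illustrative, not from our actual pf.conf):

```
# force ssh traffic from the office adminhost out the datacentre
# link instead of the default (cheap) gateway
pass in on $int_if inet proto tcp from $office_adminhost to any port ssh \
        route-to ($ext_if_a $a_gw)
```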

     