Friday, October 14, 2016

pypi-mirror

This week I implemented a custom solution for creating a PyPI mirror containing only the packages required by the projects we deploy to production. OpenStack has a great solution for this on GitHub; however, it was a bit dated and used some deprecated options in the code, which caused issues. I've since forked their repo and made the changes needed to get it all working as it should. You can see my changes here and pull the code for your own use. I am currently in the process of having them approved and merged into OpenStack's master pypi-mirror repo.

Sunday, April 17, 2016

3 Steps to fixing "ldconfig empty, not checked" due to corrupted packages

Worked this out today; figured I'd toss it on the blog for fun and hopefully to help others.

1) Get the initial list of empty libs from pacman
grep --text empty /var/log/pacman.log | awk '{print $6}' | sort -u > /tmp/liblist
2) Determine which packages they belong to so you know which need to be reinstalled
for X in $(cat /tmp/liblist); do pkgfile -s $X; done | sort -u > /tmp/pkglist
3) Force-reinstall the packages
sudo pacman -Syyy && for X in $(cat /tmp/pkglist); do sudo pacman -S --noconfirm --force $X; done
Be sure you understand these commands and what they are doing; don't just blindly run commands you find on the web. In my case they all worked out perfectly, and they should in most cases, but if something bad should come of running them, I am not liable. :-P

Wednesday, March 11, 2015

Koji Autobuild from git

This week I took some time to write a solution that automatically triggers Koji builds upon committing to a privately hosted git repo. The code is in a public repo on GitHub; bear in mind that this is the first working release, with many improvements to come. It's also my first attempt at network code, so any tips or constructive criticism are welcome.

Repo @
https://github.com/aeboccia/koji-buildfromgit

An RPM package of koji-bfg will be coming soon; for now the repo has instructions for setting up the listener and git hook.

Note:
Currently I have only written service support for systemd. I am sure there are SysV boxes with Koji instances running, so I plan to create init scripts for the service eventually.
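For the git side, a post-receive hook along these lines would do the trick. This is just a sketch, not the actual koji-bfg code: the build target name and repo URL are made up for illustration, and the koji command is echoed as a dry run rather than executed.

```shell
#!/bin/sh
# Sketch of a post-receive hook that requests a Koji build for each
# push to master. git feeds "<old-sha> <new-sha> <refname>" lines on stdin.
# "my-target" and the repo URL are hypothetical placeholders.

trigger_builds() {
    while read -r old new ref; do
        # Only trigger builds for pushes to the master branch
        [ "$ref" = "refs/heads/master" ] || continue
        # Dry run: print the koji command instead of executing it
        echo koji build --nowait my-target "git+ssh://gitserver/myrepo#$new"
    done
}

trigger_builds
```

In the real setup the hook talks to the listener service instead of calling koji directly, but the stdin parsing is the same.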

Tuesday, February 3, 2015

Koji Copy Signed

Recently I implemented a Koji RPM build server at my place of work. When it came time to sign packages before mashing repos, I was faced with a small dilemma. The Sigul signing server is a great solution for signing hundreds of packages and moving them to the correct destination on disk, where mash picks them up and mashes together a repo. For my use, however, we wouldn't need something so robust, as we'd only require a hundred or so packages in total, so I set out in search of a simpler solution.

Fortunately I found one: a Koji plugin by the name of sign.py, written by Paul B Schroeder <paulbsch "at" vbridges "dot" com>. It is a neat little plugin which signs packages at build time. The issue I ran into was that the packages would be signed at build time and then left in /mnt/koji/packages/pkgname/#/#/arch/package.rpm, while mash, when using strict_keys, looks under /mnt/koji/packages/pkgname/#/#/data/signed/keyid/arch/package.rpm for the signed packages to mash into a repo. I plan on eventually implementing this change directly in the plugin, but since I was in a hurry I whipped up a quick script to run between mash crons which copies the signed RPMs to the correct location for mash to pick up.

I will admit this is a bit redundant, since the packages are already signed at build time and can be mashed into a repo just fine provided I don't set strict_keys with mash. I prefer this method because it ensures that packages mashed into repos carry the key I specify. As for disk space, I rationalize the concern with the idea that if I were to implement Sigul, the RPMs would be copied to the same signed dir as in my solution here, so either way I'd be eating up space in two places for the same RPM.

The script can be found on GitHub @ copy_signed.py
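The core copy step can be sketched roughly like this. The key id and top directory are placeholders, and the actual copy_signed.py may differ in detail; this just illustrates the path rewrite described above.

```shell
#!/bin/sh
# Sketch: copy build-time-signed RPMs from
#   <topdir>/<name>/<ver>/<rel>/<arch>/<pkg>.rpm
# to the path mash checks when strict_keys is set:
#   <topdir>/<name>/<ver>/<rel>/data/signed/<keyid>/<arch>/<pkg>.rpm

copy_signed() {
    topdir=$1
    keyid=$2
    # Prune anything already under a data/signed tree, list the rest
    find "$topdir" -path '*/data/signed' -prune -o -name '*.rpm' -print |
    while read -r rpm; do
        arch_dir=$(dirname "$rpm")       # .../<ver>/<rel>/<arch>
        rel_dir=$(dirname "$arch_dir")   # .../<ver>/<rel>
        arch=$(basename "$arch_dir")
        dest="$rel_dir/data/signed/$keyid/$arch"
        mkdir -p "$dest"
        # Only copy if the signed location doesn't already have it
        [ -e "$dest/$(basename "$rpm")" ] || cp "$rpm" "$dest/"
    done
}
```

Running it between mash crons, e.g. `copy_signed /mnt/koji/packages <keyid>`, keeps the signed tree in sync without re-copying on each pass.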

Saturday, November 15, 2014

Killing some time with python

I was bored again tonight, so I wrote a quick, very simple average calculator in Python.
----------------------------------------------------------------------------------------------------------------------------------------
#!/usr/bin/env python3

# Little welcome message
print("Welcome to avgcalc!\n")

# Get input from the user
user_input = input("Enter a list of numbers separated by a <space>: ")

# Split the entered numbers on whitespace and convert them to ints
final_list = [int(n) for n in user_input.split()]

# Count how many numbers were entered
num_of_nums = 0
for i in final_list:
    num_of_nums += 1

# Add up all the numbers
total_of_nums = 0
for y in final_list:
    total_of_nums = y + total_of_nums

# Divide the total sum by the count of numbers to find the average
avg = total_of_nums / num_of_nums

# Print the final result
print("The average of", final_list, "is", avg)
----------------------------------------------------------------------------------------------------------------------------------------

Friday, November 7, 2014

Quick Benchmarks of Raspi NFS + External Raid Enclosure

Here are the results of some quick and dirty benchmarks. I feel they give enough of an idea of the performance you can expect from a Raspi B -> USB 2.0 HDD RAID1.

[Benchmark screenshot: 1.1GB file transfer]

[Benchmark screenshot: 268MB file transfer]
As you can see, the transfer rates are rather consistent, averaging about 7.6 MB/s.

Raspi NFS Server + External Raid1 2 Bay HDD Enclosure

I decided I wanted to replace my x86 server with something a little more "ARM" powered. Below is just a small post with the steps taken to set up my Raspi NFS server. I used the Raspi for now as a POC, since it's somewhat limited by having only Fast Ethernet and no eSATA/USB 3.0.

Shopping List:
  1. 2 x Western Digital Caviar Black 1.0TB HDDs
  2. 1 x Raspberry Pi Model B with Arch Linux ARM
  3. 1 x StarTech external 2-bay RAID enclosure



Since I am using the Pi, I took advantage of the enclosure's USB 3.0 backwards compatibility and plugged it into the Pi's USB 2.0 port.



The Pi should see the RAID as one device.

I didn't bother to partition (you can if you like); I formatted the entire RAID device using XFS.

Next I set up fstab for my new mount point

Next I set up the NFS export

Finally I started the services
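Put together, the steps above look roughly like this. The device name, mount point, and network range here are hypothetical placeholders; check what your Pi actually assigns before running anything.

```shell
# Sketch of the setup, assuming the enclosure shows up as /dev/sda
# (device name, mount point, and subnet below are placeholders)

# Format the whole RAID device with XFS
mkfs.xfs /dev/sda

# Add an fstab entry for the new mount point, then mount it
echo '/dev/sda  /srv/nfs  xfs  defaults  0 0' >> /etc/fstab
mkdir -p /srv/nfs
mount /srv/nfs

# Export the share to the local network
echo '/srv/nfs  192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# Enable and start the NFS services (Arch Linux ARM uses systemd)
systemctl enable rpcbind nfs-server
systemctl start rpcbind nfs-server
```

From a client, `mount -t nfs <pi-address>:/srv/nfs /mnt` should then bring up the share.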

Once all of the above was complete, I was able to mount the share on my desktop. I will post some transfer speed stats soon. Since this POC has worked, I've decided to grab a slightly beefier ARM machine to serve NFS: a CuBox-i4Pro will replace my second Pi, as it has GigE and eSATA support, which should boost performance drastically over the Pi.