Sunday, November 17, 2013

Is that laughter?

Augh! I departed from my normal practice of compiling everything I use from source code. (I used a software package for indexing the documents in my search engine.) 

As part of the indexing function, the new tool copies the output of the indexer to the user interface, to show successful/unsuccessful database runs. I just noticed that the very last line says:

  rotating indices: succesfully sent SIGHUP to searchd

This is my penance for using packaged software. My choices are:
  1. Ignore the word and silently obsess over it
  2. Download the source, fix the spelling, and compile the code from scratch
  3. Contact the package maintainer and annoy him/her until they fix it
  4. Contact the author and annoy him/her until they fix it
Since options 2-4 take time, I'll probably employ #1 until I'm so annoyed with myself that I get up in the middle of the night, fix the code, compile it, and send a patch to the author. Oh, and accept the scolding from Bernadette for waking her.

This after leaving a community because a few newer members were being too pedantic. Somewhere, I hear Karma snickering.

Saturday, November 16, 2013

Down hard again

It appears that 757.org is down hard, except for IRC.  Because of this, I'm re-enabling the blog on Wordpress.  Until I can find a (cheap) home for the wiki, it'll be offline.

I've moved most of the posts from the old blog to here.  If I've omitted any that I shouldn't, please point them out.

In the meantime, I'll be playing with themes and plugins here.

Sunday, September 15, 2013

Virginia Beach Hamfest 2013

I attended Virginia Beach Hamfest 2013 this weekend. Participation was a bit limited in that none of the major vendors attended. Notable this year was the absence of an MFJ vendor. I may need to switch to making my major purchases at FrostFest, vice the VB Hamfest.

The usual attendees were missing: Sparks (moved away), Tracy (moved away), and even Ethan and Matt. Was there something else going on elsewhere this weekend?

In any case, I was able to pick up a PowerGate and a RigRunner, both built by West Mountain Radio. I'm hoping that I can pick up the ISOpwr module at the next FrostFest. I'd planned on dropping this rig into the Invisible Car, but after the third rear-ender, coupled with the batteries crapping out, I decided to get rid of it. I'm hoping that I can get it into the non-hybrid Civic with not too much extra fuss.

After that, I only need a decent battery and a small Pelican case, and I should be ready for the next outage/exercise. Of course, there's the usual tweaking planned (a small generator, a more-capable radio, etc.).

Note: I "borrowed" the idea for this rig from Chris Hosman (KC4F), who built a number of semi-portable rigs for the Hampton ARES group. Given that I do most of my comms on the road (antennas tend to irritate wife and neighbors), it looked like something that I needed to build for myself.

Sunday, August 4, 2013

TWUUG's Super Summer Saturday Meeting

Attended the Tidewater Unix User Group's annual Saturday meeting (they normally meet on the 2nd Thursday of the month). Got rid of two of my books, picked up a new one as a door prize.

The talks were interesting, though turn-out was a bit thin (mostly the long-time members). I wasn't planning on attending, as I had intended to show up at either the North side or South side Makerspace Open Houses (the South side ended up with 60+ visitors). A last-minute call from a friend changed my plans.

There were three talks, in addition to the usual before-it-gets-underway discussion. The talks were on podcasting, fldigi (amateur radio) (thanks Tracy), and eBooks (thanks Matthew).

Caught up with Dave S. on a few things that we've been planning. Now that I have most of the RPi/Z-Wave project out of the way, I can refocus on the digital signage projects. Finally had the chance to ask Mark D. what he was using for his sign. He indicated that the back end is just a bash script and that someone was rewriting it in Python. I've been thinking about what his statement implies and need to catch up with Dave S. again.

In any case, I need to get the backdrop in the office cleaned up again and get the podcasting rig dusted off. We may have some work to do in the near future.

Sunday, July 28, 2013

Real-time meta data from Icecast using LiquidSoap (reprise)

I found a bug in my script. It pops up when there's an apostrophe or a paren in the song title. Below, I've modified the script slightly to fix this issue. Also, I've changed the external program call so that it uses notify-send to pop up the song title on my desktop.

set("log.file",false)
set("log.stdout",false)
set("log.level",3)
def apply_metadata(m) =
title = m["title"]
#print("Now playing: #{title}")
system("notify-send #{quote(title)}")
end

radio = input.http(
user_agent="joats/kludged-up-script",
timeout=30.,
poll_delay=.5,
new_track_on_metadata=true,
force_mime="audio/mpeg",
buffer=.5,
id="original",
"http://music.joat:8000/airtime_128"
)

radio = on_metadata(apply_metadata,radio)
output.dummy(fallible=true,radio)

For those that can't see it, the changes are all in the line starting with "system".

Real-time meta data from Icecast using LiquidSoap

One of the annoying things about trying to pull metadata from Icecast is that it's a "pull". This is typically cron'd and can be as much as a minute "late". The following LiquidSoap script fixes this issue, allowing for a metadata "push".

The following listens to an Icecast stream and only extracts the metadata. It does not forward any audio. The below becomes valuable when you want to post "Now Playing" data to digital signage, IRC, or Jabber channels.

set("log.file",false)
set("log.stdout",false)
set("log.level",3)

def apply_metadata(m) =
title = m["title"]
print("Now playing: #{title}")
# system("~/Desktop/mytest.bash '#{title}'")
end

radio = input.http(
user_agent="joats/kludged-up-script",
timeout=30.,
poll_delay=.5,
new_track_on_metadata=true,
force_mime="audio/mpeg",
buffer=.5,
id="original",
"http://music.joat:8000/airtime_128"
)

radio = on_metadata(apply_metadata,radio)
output.dummy(fallible=true,radio)

Note that there's one line commented out in the above. It's there as an example, for when you want to pass the variable to an external script. About the only other line you'd need to change is the one containing "music.joat". Point that at your Icecast server.

Note: "output.dummy" is needed, to keep LiquidSoap from complaining that there's no output defined.

Monday, July 22, 2013

Please just STFU!

Maybe it's just me getting old but I miss the "good old days" of the Internet, where you could ask a question (or search) online and you'd get an answer or volunteers to help figure it out. Seeing as how I've been on the Internet in some form or other since the '80s (yeah, I'm an old fart), it's probably my age.

That being said, a long-running trend I've noticed is the tendency to respond to questions without actually answering them. Responses tend to fall into a few stereotypical categories. Example: while researching the implementation of dynamic DNS servers, I came across a mailing list thread that irked me a bit. The answers in the thread fell into the following categories:

  • The nonspecific answer: set up a server and write a script.

  • The LMGTFY answer: search Google for it.

  • The fake offer of help: I'd help but I'm too busy.

  • The not answering the question answer: Responding to a response instead of the initial question (start a new thread, dammit!).

  • The "didn't understand the question" answer: Missed it by that much.

  • The cut and paste expert's answer: Someone attempting to make themselves look knowledgeable via blatant plagiarism.

  • The actual answer: It is usually: 1) the shortest response, 2) the last response, and 3) posted months or years after the initial question was asked.


Please! If you're not answering the question, you're just adding to the background noise. And, yeah, above is the reason that I vote down answers in ServerFault, StackOverflow, and SuperUser.

Friday, July 12, 2013

Docbook admonitions

For the better part of a year, I've been attempting to get Docbook to produce admonition graphics (i.e., note, important, and warning). Having worked with Publican, I wanted a similar format without all of the baggage that comes with Publican.

Publican fans should note that I'm using a home-grown web editor for my Docbook work. I save brain cells by not having to remember which switches to use when running xsltproc at the command line. Getting Publican to work with a similar interface was a nightmare (ask if you want either).
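
For reference, the admonition-related switches are the ones I keep forgetting. Below is a sketch of the xsltproc call; the stylesheet and icon paths are examples and will differ on your system:

# enable admonition graphics when rendering DocBook to HTML
# (stylesheet and icon paths are examples; adjust them to your install)
xsltproc \
  --stringparam admon.graphics 1 \
  --stringparam admon.graphics.path images/icons/ \
  --stringparam admon.graphics.extension .png \
  -o output.html \
  /usr/share/xml/docbook/stylesheet/docbook-xsl-ns/html/docbook.xsl \
  mydoc.xml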

In any case, I've switched from running Docbook 4.5 to 5.0. The "good":

  • The syntax is cleaned up a bit.

  • Admonition graphics now work.

  • One more thing off of my "to do" list!


The bad:

  • I had to rewrite a chunk of the web editor's code to get it to work with 5.0.

  • The syntax checker is more rigid than the previous version (see closing tag discussion below)


The really ugly:

  • Primarily, everything that I've written to date. The older version was tolerant of missing close tags if a more-senior opening tag was declared. This is no longer the case. It appears that all closing tags must be explicit.

  • Then there's my hand-drawn graphics. The same gene set that prevents anyone in my family from carrying a tune also appears to prevent any of us from drawing anything attractive. I need to find a nice set of "note", "important", and "warning" icons that are Creative Commons licensed, so that I can start throwing my docs up online.


The only other things left to do are to fix the syntax in the existing content and to tweak the features of the web editor. Notes will be in the wiki, and the tool plus docs will be online shortly.

Sunday, July 7, 2013

Scripted XChat tab renaming II

Following is a slight modification to the XChat tab renaming script from 21 May. One thing that I noticed was that, if I closed a channel before running the script, the script would rename the server tab to the missing channel's name.

In the following, the context line grabs the context for the specific channel. If the channel doesn't exist, context is set to 0. The fix is just a simple check for the status of the context variable. If it's set to 0, it skips renaming the channel.

#!/bin/bash

#following renames specific channels in XChat2

# "#192.168.2.215_docs" to "docs"

context=`dbus-send --dest=org.xchat.service --print-reply --type=method_call /org/xchat/Remote org.xchat.plugin.FindContext string:"bitlbee" string:"#192.168.2.215_docs" | tail -n1 | awk '{print $2}'`

if [ "$context" -ne 0 ]
then

dbus-send --dest=org.xchat.service --type=method_call /org/xchat/Remote org.xchat.plugin.SetContext uint32:$context

dbus-send --dest=org.xchat.service --type=method_call /org/xchat/Remote org.xchat.plugin.Command string:"settab docs"

fi

# repeat the above, as needed, for any other channels

The above works with my set up and no longer impacts the server names in XChat.
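
If you have more than a couple of channels to rename, repeating that block gets old quickly. Below is a sketch of one way to wrap the dbus-send calls in a reusable function; it uses the same XChat dbus interface as above, and the channel/tab names at the bottom are just examples:

#!/bin/bash
# rename_tab SERVER CHANNEL NEWNAME - wraps the dbus-send calls from above
rename_tab() {
  local server="$1" channel="$2" newname="$3" context
  context=$(dbus-send --dest=org.xchat.service --print-reply --type=method_call \
    /org/xchat/Remote org.xchat.plugin.FindContext \
    string:"$server" string:"$channel" | tail -n1 | awk '{print $2}')
  # skip channels that aren't currently open (FindContext returns 0)
  [ "$context" -ne 0 ] || return 0
  dbus-send --dest=org.xchat.service --type=method_call /org/xchat/Remote \
    org.xchat.plugin.SetContext uint32:"$context"
  dbus-send --dest=org.xchat.service --type=method_call /org/xchat/Remote \
    org.xchat.plugin.Command string:"settab $newname"
}

rename_tab "bitlbee" "#192.168.2.215_docs" "docs"
rename_tab "bitlbee" "#192.168.2.215_news" "news"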

Friday, July 5, 2013

Visit to the 757 Makerspace

There's a new makerspace going up nearby. I finally had the time to visit it today. Although it's a bit farther away, it's still within an hour's drive of the house (in non-workday traffic).

With apologies to all concerned, I'm forced to compare it with the now-closed 757Labs hackerspace as that's the only other "space" I've ever visited.

  • The building is a bit older than the now-closed 757Labs, and it's much more industrial.

  • The location is in an industrial area. 757Labs was in a business area. I see this as an advantage as we can make more maker-type noises without worrying about upsetting the neighbors.

  • There are no hang-out facilities in the makerspace. This was an ongoing point of contention at 757Labs, between those there "to accomplish actual work" and those there to "network" (okay, to hang out on one of the couches).

  • Although there seems to be much the same equipment in both spaces, the makerspace has dedicated locations for them (i.e., a locked room for the equipment which requires special training (e.g., the laser cutter)). Maybe it's just that the makerspace isn't (yet?) overrun with people's in-progress projects.

  • There's a restaurant right across the street which has been described as having a "full menu". I'll hold my opinion until I've visited it.

  • There's just so much more space, though fewer chairs. Hopefully it'll force a focus on work (and cleaning up after yourself).

  • The dues appear to be around twice what 757Labs required.

  • Web site here. Facebook page here.

  • Oh, and parking! We won't talk about the parking issues experienced at 757Labs. Let's just say that, from the looks of it, the makerspace just has more.


I'm looking forward to being a member and getting some of my on-hold projects completed. First need is some cases for Raspberry Pi's with different daughterboards installed (e.g., PiFace, RazBerry).

Sunday, June 30, 2013

Fixed permanent links

I'm embarrassed. I'd started re-using the old Blosxom code from 2003, which I'd heavily modified over the years, to re-post this blog on 757.org. The one thing that I didn't do was test the links in each of the stories. It was only when someone tried to link to one of my posts (sorry Wim) that anyone noticed they were all broken.

To make it short, I've regenerated the entire site and pushed it back onto 757.org. If anyone sees any other bugs, please give a shout.

Wednesday, June 26, 2013

Oh puhlease...

It's been a while since I've discussed anything security related on this blog. This is mostly because I set that ball down every day at 4:30 and don't want to pick it back up until 8:00 on the next workday. However, this article on Slashdot has me spun up enough that I'm willing to gripe about it. I can see this being picked up by the mainstream media and yet another bout of fear-mongering making the rounds. Kibo help us if they rediscover that Wemo video.

As a business idea, this is really cool. The service vendor only needs to stand up one web server which accepts commands and sends them back into the user's network. Very little needs to be stored or processed on the web server, yet the vendor gets to pull in $8 or $9 from each "premium" customer. The consumer also hands over money for the hardware.

Those who use cloud-based controls to manage electrical appliances, without strong authentication and strong encryption, are taking big risks (and, no, a username and password encrypted by SSL may not meet those requirements). If you're going to manage environmental controls over the Internet, do it on your own server and require a non-split VPN to access them. Better yet, manage those controls via a network that is entirely isolated from external access.

The primary countermeasure for the mentioned direct attacks on the protocol or the devices is: maintain a baseball bat at each of the exits from your house. Z-wave and Zigbee are very low power, very low bandwidth communications protocols, meaning if there's a direct attack on your components, the attacker is probably within view of your front or back stoop. The technical term "mechanical agitation" comes to mind.

If you want to manage your environmental controls and your appliances, avoid the public services. Instead, stand up your own controller/gateway, and avoid putting it on the Internet. If you don't like the DIY approach, use one of the Mi Casa Verde products (or a similar vendor's product). If you do like the DIY approach, build your own with a Raspberry Pi, a Razberry interface, and an XBee interface. Both approaches are cheaper than what you'll end up paying the public services and, if you're a coder, they're also more expandable/extendable.

dnscrypt-proxy with BIND9 on the Raspberry Pi

Ever since my service provider started injecting data to replace DNS "lookup failed" messages (so that your browser showed one of their ads), I've been looking for a means to avoid using their DNS servers. Initially, it was easy. I just used a different forwarder. The ads returned when my service provider started intercepting those (my guess: via a dnsmasq variant).

This was an issue because I monitor a number of DNS entries for friends, family, and customers (as well as web server front pages and other network services). My service provider removing all "lookup failed" messages, and intercepting outbound queries, limited what I could monitor via DNS queries.

The next step in this DNS arms race appeared to be dnscrypt-proxy. By default, it encrypts DNS queries and sends them out over port 443, to OpenDNS's name server (which also "speaks" dnscrypt). Note: you are able to change both the port and the name server, if needed.

I've added notes to the wiki for adding dnscrypt-proxy to the Raspberry Pi, including a start up script, using the code from Git Hub (vice the distributed tarball), and getting it to work as a forwarder for BIND9.
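
For anyone who only wants the BIND9 piece, the forwarder stanza is about all there is to it. The fragment below is a sketch, assuming dnscrypt-proxy is listening on 127.0.0.1 port 5353 (the port is arbitrary); the full notes are in the wiki:

// /etc/bind/named.conf.options (fragment)
options {
    // hand every recursive query to the local dnscrypt-proxy listener
    forward only;
    forwarders { 127.0.0.1 port 5353; };
};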

There were two drawbacks that I discovered. One is: my service provider has since discontinued the practice of injecting ads into DNS queries, while OpenDNS has taken up injecting their search page info into DNS failed-query responses. The other is: there is an issue with DNSSEC. While dnscrypt-proxy supposedly does work with DNSSEC, OpenDNS currently breaks it.

If anyone knows of a "trustable" public DNS server that works with both dnscrypt and DNSSEC, and doesn't inject traffic, please let me know. In the mean time, I'll continue to use the tool with OpenDNS, 'cause their service has other features that I like having (e.g., filtering of objectionable web sites).

Monday, June 24, 2013

Messing with bleeding edge tech

Spent a sizeable chunk of the weekend playing with the cutting edge. In this case, it was hand gesture recognition and Hadoop.

I'm slowly making my way through the free Hadoop Fundamentals I course, offered by BigDataUniversity. It got off to a slow start because of the VM included with the class. It's only available via a self-extracting RAR set, requiring a Windows machine to rebuild it. I had to install a Windows VM, extract the RAR, THEN convert it to an ESXi VM so that I could access it. I've managed to work my way into the lab at the end of Lesson 2 so far.

Another part of the weekend was more or less wasted on trying to get hand gesture recognition up and running. I have this idea that I can get it set up with a modified web cam so that I can gesture in a darkened room and cause the lights to come on.

I managed to open up one of my junk-box web cams and remove the filter over the sensor. It works pretty well, requiring only minimal light in the room. The hard part (still) appears to be the software needed to recognize a hand gesture and get it to run the WeMo software. I did get OpenCV built and am able to track my face (via one of the demos). Getting it to recognize hands and hand shapes still seems to be on a build-your-own level. In short, much more research (on my part) is needed.

Sunday, June 23, 2013

Catching up

Finally had the chance to catch up on some of the wiki notes. The following articles were added (in no particular order):

  • Making an internal web server available on an external server via SSH

  • Adding Festival to Xchat2

  • Anonymous Proxies

  • Slowing down SSH brute force attacks

  • Monitoring the Temperature on a Raspberry Pi

  • Cross-referencing spreadsheets in Excel

  • Counting multiple columns in Excel

  • Macro - Copying data values between Excel spreadsheets

  • Resetting the root password for MySQL

  • Numeric sorting in MySQL

  • Fixing the Ubuntu SSH shutdown script

  • Fixing jerky Flash video

  • What to do if Fedora updates very slowly

  • Turning off Join and Part notices in XChat2

  • Start up script for ZNC

  • Compiling XV on Ubuntu

  • Opening PDFs in Chrome

  • How to add a password to your private SSH key


You can find them in the wiki by searching for an appropriate keyword, or by clicking on "Recent Changes" at the bottom of the page.

Tuesday, June 18, 2013

Setting up ddclient on the Raspberry Pi

Following are my notes for configuring ddclient on the RPi so that it updates Hurricane Electric's dynamic DNS.

First off, my configuration:

  • I run dyndns on my router to update DynDNS (for a dynamic domain). This is independent of the below and exists for other reasons.
  • I run ddclient on the Raspberry Pi (for a dynamic host/IP in a permanent domain) to allow my XMPP server to be "visible" on the Internet.
  • I purchased a domain from hover.com and edited the nameserver records so that they point to Hurricane Electric's free DNS servers.

Mathieu Lutfy has a good howto for setting up ddclient. It only needed a little tweaking to get it working for my use. Steps:

  1. Point a browser at https://dns.he.net.
  2. Log on (upper left) or register (see opening paragraphs) and then log on.
  3. Set up your domain in the Hurricane Electric DNS server (outside of the scope of this howto).
  4. To add a dynamic record, click on "New A".
  5. Enter the hostname that will be dynamic and check the box (near the bottom) for "Enable entry for dynamic dns". This will populate the IP and TTL entries.
  6. Click submit. This will take you back to your zone listing.
  7. In the DDNS column, you should see a "refresh" icon. Click on that. It will ask you to provide the key to be used for authenticating the host with the service. Near the bottom is a green "Generate" button that will auto-generate the key. This key will be the password in your ddclient.conf file.
  8. Install ddclient via the usual means (for Ubuntu, this would be "apt-get install ddclient"). Immediately, manually kill the ddclient process as the shutdown script does NOT work properly and will not kill the daemon. You will want to wait five minutes (the TTL setting from the above) before running the service again.
  9. In the meantime, use your favorite editor to edit /etc/ddclient.conf. Make it look something like the below:
      # Configuration file for ddclient generated by debconf
    #
    # /etc/ddclient.conf

    pid=/var/run/ddclient.pid
    protocol=dyndns2
    use=web
    server=dyn.dns.he.net
    login=yourhost.yourdomain.com
    password=TheKeyGeneratedInStep7
    yourhost.yourdomain.com

    With the exception of the obvious hostname entries, the above differs from Mathieu Lutfy's configuration only on the "use" line. His configuration indicates that his box is directly connected to the Internet in that "use=if, if=ppp0" grabs the IP locally and provides it to the DNS server. The configuration above indicates that ddclient is running on an internal box (i.e., behind my firewall). "use=web" causes the server end to determine the external IP.

  10. Start the ddclient service and check your log files (for Ubuntu, this is /var/log/daemon.log). You should see an entry somewhat like:
      Jun 18 05:30:57 pi1 ddclient[15458]: SUCCESS: updating yourhost.yourdomain.com: good: IP address set to YourExternalIP

    If you don't see that, ddclient will generate a number of entries which can be used to troubleshoot the error. Please heed any warning not to attempt an update before 5 minutes have passed (a corrected entry sent too soon will be ignored). If you want to rule out ddclient itself, you can test the update by hand; see the curl sketch after this list.
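
The manual test is a one-off update using curl. This sketch is based on my reading of Hurricane Electric's dyndns2 endpoint; substitute your own hostname and the key from step 7:

# manual one-shot update against Hurricane Electric's dynamic DNS endpoint
# (hostname and key are placeholders; double-check HE's docs if it errors out)
curl "https://dyn.dns.he.net/nic/update" \
  -d "hostname=yourhost.yourdomain.com" \
  -d "password=TheKeyGeneratedInStep7"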

Thursday, June 13, 2013

Running misc. stuff on the GoFlex

It started as an effort to reduce the house power bill and has grown a bit. The home server draws some serious power and converts a lot of it to heat (nice in the winter, not so nice in the summer).

I've been playing with various low-end devices (access points, file servers, etc.) to see what's usable. The one that appears to be the most useful (so far) is a converted GoFlex Home, which now runs a variant of Arch Linux.

It's currently running Apache, PHP, and MySQL, supporting TT-RSS (an RSS feed aggregator/reader), two instances of SemanticScuttle (a bookmark manager) (one for short-term storage, the other for long-term), and a copy of PmWiki. I'm looking to add OwnCloud, BIND (DNS), WOL (to wake up the big server), knockd (for remote access), and some sort of XMPP aggregator (Bitlbee, maybe?).

So far, it seems to be holding its own, even though it's a little slow in presenting the Ajax interface for TT-RSS. The one thing that I did have to do in support of this was to add a swap partition to beef up the memory.
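
For the record, the swap addition was nothing exotic. Below is a sketch of the swap-file variant (the size and path are examples; a dedicated partition works the same way, minus the dd):

# add a 512MB swap file and enable it
dd if=/dev/zero of=/swapfile bs=1M count=512
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# make it permanent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab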

I'm still undecided if I should host the main wiki on the GoFlex, as the wiki has more active content than the GoFlex likes (I tried it). Maybe move the active content to its own page on the big server and post the more-or-less static stuff on the GoFlex? It has a 2TB drive attached to it, so it'll at least have a backup copy of all of the cruft that I've gathered since I first started messing with computers in '84 (yeah, CompuServe/pre-Internet days).

Update: Putting the big wiki on the GoFlex didn't work well as there's just too much extra code running in the background (status monitors, presence indicators, etc.).

Wednesday, June 12, 2013

How to open the PogoPlug (v4) case

Not that you'll need it for anything, but if you're ever interested in opening a version 4 PogoPlug, use the following steps:

  • On the bottom, peel back the two rubber feet which are closest to the jacks. The adhesive employed will allow you to stick them back on later.

  • Remove both screws

  • Use a spudger or similar tool to pry the base from the case. Note: the most obvious seam (the one that runs from above the jacks on the back to below the logo on the front) is just decorative. The base piece is only about a quarter-inch thick. Looking at the PogoPlug from the side, try putting the spudger into the tip of the pointy part. (picture below)

  • Work the spudger around the seam. You'll hear various clicks as plastic tabs release (note: some will likely break). Continue working the spudger until you have the case open.


[image: newseam]

Wednesday, June 5, 2013

Running Openfire on the RPi

It appears that the only extra thing that you need to get Openfire up and running on the Raspberry Pi is patience (lots of it). Mostly it's because the RPi is so much slower than a normal desktop system.

Installing the prerequisites takes longer. There are quite a few if you're installing all of the pre-reqs on a clean install of Raspbian. I cheated a bit and used a MariaDB install running on a nearby GoFlex (also a bit slow).

Performing the configuration piece of Openfire is what takes the longest. Below is a temperature graph of the RPi in use. The spike, just after midnight on Tuesday, is about an hour long. This was only the piece between filling in the administrator's password and clicking next. This is where the install scripts go off and populate the database, add plugins, and generally tie up the processor.

[graph: pitemp-day]

At the time of this post, Openfire has been running about 2 days without issue. According to the "Server Information" page in Openfire, it's running version 3.8.2 on about 15 MB of memory, and top is reporting a server load that fluctuates between 0.03 and 0.10, with Java being the most active process.

This weekend, I'll need to install all of the other plugins and the two bots which watch the DMS. I'm hoping it doesn't eat up too many more processing cycles.

Sunday, June 2, 2013

Invasion of the Pedants

I've been a member of a certain online community for just about a decade. Admittedly, I was dragged there (kicking and screaming) by a friend. While he eventually moved to focus on other things, I hung around and provided support in the background in the form of research/review, writing/debugging code, and authoring occasional verbiage.

Sadly, I've just let the community know that I was ceasing all active support. While a certain few can still request custom work of me, the remainder of the community should feel free to ignore me as actively, as passionately, and as incessantly as humanly possible. Please!

What happened was that a pedant announced that specific words should no longer be employed in documentation because they remind a certain demographic of their disability. Being a member of that demographic, I took offense at the suggestion and publicly pointed out that the logic used to justify it implies a certain prejudiced over-sensitivity on the part of the disabled.

It turns out that, where I thought that I was dealing with a single pedant, he had compatriots. In short, the community reacted by taking me to task for top posting in email, rather than addressing the offensive logic.

Fewer things can kill a community effort quicker than a couple of pedants, especially if they have any sort of authority within that community. While I wish the overarching project good will, I suspect that the associated volunteer effort will suffer in the coming years.

What triggered the above rant was this post about what appears to be an effort to make a specific pronunciation mandatory. In short, the author has submitted formal suggestions to "standardize" pronunciation. Hopefully, the group will give the suggestion the time it deserves (i.e., none).

So, to the group which I've just left (as a whole): 'Bye. The annual get-togethers were fun/interesting. We even saved a life once.

To the pedants in the group: GFY. You deserve each other. Same goes for any pedant whom this post offends.

To the other group (Ubuntu): good luck. I'd considered joining, but it appears that you also have a pedant infestation. Maybe later.

Thursday, May 30, 2013

WeMo and Linux

Finally got the WeMo to join my home network. The trick with the Android phone was: sheer stubbornness. I just kept hitting the green "Networks" button until it saw my home network. This after a week of messing with various Python scripts and Miranda.

Once it was on the home network, getting it to work from the command line was mostly straight-forward. My thanks to Eric Blue for the Perl library. The one issue that I ran into was learning that the belkin.db file is not portable across machines. In short, each machine that will control the WeMo needs to "discover" it and record its own "belkin.db" file.

For the record, the WeMo is connected to the home network via WPA-PSK, on channel 11. I didn't need to set up any special channels or networks. However, I did need to install Net::UPnP::ControlPoint before discovery would work.
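
For anyone following along, installing that module was just the usual CPAN dance (or your distribution's packaged equivalent, if one exists):

# install the UPnP control-point module that WeMo discovery needs
sudo cpan Net::UPnP::ControlPoint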

Notes will be in the wiki shortly (link at top of page).

Monday, May 27, 2013

Broken feed

My apologies. Just realized that, when I changed the tag line a couple weeks back, I'd used an illegal character in the description field. I've fixed it and ran the feed through a validator to check. It now only complains about the age of the namespace I'm using. To fix part of this, I'm pushing it through FeedBurner. The link for it is here.

I've also removed some of the junk from the front page and from the code that generates the static content. Please bear with me while I trim the mold off of the 10+ year-old code.

Monitoring the RPi's temp

Following is a simple Munin plugin that will monitor the temperature on your Raspberry Pi. Assumption is that you have Munin installed on the RPi.

#!/bin/sh

case $1 in
config)
cat <<'EOM'
graph_title RPi Temperature
graph_vlabel temp
temp.label C
EOM
exit 0;;
esac

echo -n "temp.value "
/opt/vc/bin/vcgencmd measure_temp|cut -d'=' -f2|cut -d\' -f1


Save the above as a file called "pitemp". You'll also want to edit /etc/munin/plugin-conf.d/munin-node and add the following entry:

[pitemp]
user root

Above will show up in the system's "Other" category. You can alter this by adding a line to the script. See Munin's "How to Write Plugins" page for more detail.
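
In case it saves someone a search, activating the plugin amounts to dropping the file where munin-node can find it and restarting the node. A sketch, assuming the usual Debian/Raspbian paths:

# install the plugin, enable it, and restart munin-node to pick it up
cp pitemp /usr/share/munin/plugins/pitemp
chmod 755 /usr/share/munin/plugins/pitemp
ln -s /usr/share/munin/plugins/pitemp /etc/munin/plugins/pitemp
service munin-node restart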

Sunday, May 26, 2013

Auto-update not necessarily good

Following is for my own notes and for anyone that needs 'em. Below is "the hard way" because I couldn't remember the changes that I'd made to the admin account's name and password.

Sat down yesterday to read the various RSS feeds with TT-RSS. Instead of the login screen, was met with a screen with a number of complaints about SPHINX, MEMORY, and a couple about crypto. The one at the top was "FEED_CRYPT_KEY should be exactly 24 characters in length" (it's what I searched with).

The cause turned out to be an automatic update for TT-RSS. The new version didn't like the old config.php file, because it was missing a number of new settings (four, IIRC). This part was easily fixable. I made a backup copy of the old config file (just to be safe), and then copied the new config.php-dist over the old config.php. In short:

cp config.php config.php.bak.20130525
cp config.php-dist config.php

Editing the usual settings in the new config.php fixed the getting-to-the-login screen issue, but I still couldn't log in. Instead, TT-RSS started responding with "Your access level is insufficient to run this script." The script remained unnamed.

A little bit of research later, it turned out that I needed to change a setting in the database. What was happening was that the new code was attempting to update the database schema but my account didn't have sufficient access to the MariaDB database.

With apologies to Postgres users, the following fix is for TT-RSS users who employ a MySQL or MariaDB back-end. In short, access the database from the command line and run the following:

use ttrss;
select id,login,access_level from ttrss_users;
update ttrss_users set access_level=10 where id=2;
quit

Of course, you'll need to edit the above to suit your own instance. The purpose of the select line is to check the access_level for my account. The update line gives my login admin-level access. The next time I logged in, I followed the prompts, TT-RSS updated the database, and everything went back to normal.

To clean up, you'll need to set your access level back to the original value. Use the above steps to change access_level back to 0 (for a regular user account) or 5 (for a power_user account). An alternative would be to use preferences to reset the admin account password, log in with that account, and use "Preferences - Users" to change your user account's access back to that for a normal user.

The above needed to be done because I couldn't remember the admin username and password. If you do remember it, when TT-RSS complains about insufficient access, just log in with the admin credentials and skip the changes to the database.
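
And if you'd rather do the access_level cleanup from the shell in one shot, something like the following works (the id and access_level values are examples; match them to the output of the select statement above):

# drop the account back to a normal user once the schema update has run
mysql -u root -p ttrss -e "update ttrss_users set access_level=0 where id=2;"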

Thursday, May 23, 2013

Getting myself in trouble

My wife once discovered me lying on the floor of the office, mostly under the desk, not visibly moving. She panicked until she figured out that I was screwing the mounting bracket for a switch to the bottom of my desk. Ever since, there's been a hardware moratorium at our house.

Of course, there have been exceptions. She did buy me the Raspberry Pi last December. It's what I've been up to the last few nights, though, that may be ruining that exception or maybe even triggering a Crazy-Eddie-The-Used-Car-Salesman event (i.e., "everything must go!").

In an effort to cut back on the electric bill, I'm moving a number of "core" services off of the desktop computer (which draws 300W 24x7) and placing them on a "slightly modified" GoFlex Home and the RPi. This will lower the power drain by about 90 percent (to less than 30W).

The dangerous part: the only place to put them (to be able to use wake-on-LAN to remotely start the house server) is on my wife's desk. Her desk has a hutch, which causes the monitor to hide a bunch of empty space behind it. I took advantage of it being dark back there and positioned the GoFlex in there with the LED facing where it can't be seen.

Earlier this evening, I was looking for an innocuous place to put the RPi. I discovered that her monitor has VESA mounts. Since the other half of her present to me was a VESA-mount case for the RPi, I wonder if she'll notice a couple extra cables (power & Ethernet) running to the back of her monitor. We'll see...
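
(For the curious, the wake-on-LAN part is a one-liner from either of the small boxes, assuming the wakeonlan package is installed; the MAC address below is made up.)

# wake the house server from the GoFlex or the RPi
wakeonlan 00:11:22:33:44:55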

Tuesday, May 21, 2013

Scripted Xchat tab renaming

Following is a shell script that will rename a specific tab in Xchat. The reason I came up with this is that Bitlbee produces some very ugly channel names in Xchat. In one of the much older branch versions of Bitlbee, there was a function that allowed you to rename the channel. This has since gone away.

While poking around in the Xchat command set, I found the settab command which allows you to temporarily (until the next join) rename a channel. A bit of further research showed a dbus interface to Xchat. From there, it was simple.

#!/bin/bash

#following renames specific channels in Xchat2

# "#192.168.2.215_docs" to "docs"

context=`dbus-send --dest=org.xchat.service --print-reply --type=method_call /org/xchat/Remote org.xchat.plugin.FindContext string:"bitlbee" string:"#192.168.2.215_docs" | tail -n1 | awk '{print $2}'`

dbus-send --dest=org.xchat.service --type=method_call /org/xchat/Remote org.xchat.plugin.SetContext uint32:$context

dbus-send --dest=org.xchat.service --type=method_call /org/xchat/Remote org.xchat.plugin.Command string:"settab docs"

The configurable bits are the two "string" entries in the line starting with "context" and the "settab docs" entry in the last line beginning with "dbus-send". Note: if you make an error in the "string" entries, the channel name for the previous call will be changed. (You'll know what I mean when it happens.)

Chain a bunch of these in a single script and you can mass rename channel tabs. Add a desktop launcher for the script and you can mass rename Xchat tabs with a single mouse click. Enjoy!

Saturday, May 18, 2013

Eating my own dog food

When working on Unix systems with other people, I often point out that rule #1 is "Read The Fine Manual". Alternative phrasing can include: "What's the man page say?" or "You should search Google for that." The idea being that they should get used to looking things up. Asking questions should be the method-of-last-resort.

That being said, it looks like I need to be eating my own dog food. At one point, early in the development (it may have been a branch) of Bitlbee, there was the ability to rename the channel. Instead of having to live with "#identica_packetgeek", you could make the channel name look like "identica". This function was short-lived as it wasn't retained in later versions.

In setting up a new document management system, complete with a Statusnet instance to track document updates, I needed to point Bitlbee at the new interfaces (Statusnet and XMPP). I don't remember why it was that I was reading the XChat2 docs, but I stumbled across the "/settab" command. It's exactly what I'd needed.

Now my Bitlbee channel list looks like: "news", "docs", "identica", and "twitter", instead of "#192.168.2.215_news", "#192.168.2.215_docs", "#identica_packetgeek", and "#twitter_packetgeek". Much, much cleaner!

Now you'll have to excuse me while I take myself out back and beat myself up.

Saturday, May 11, 2013

Modifying SemanticScuttle to open a new window

One of the annoying things in SemanticScuttle is that clicking on a listed link opens that link in the current window. This is easily fixed:

  1. Find bookmarks.tpl.php and open it in your favorite text editor.

  2. Find the line containing the word "taggedlink" and add the following (inside of the href brackets):  target="_NEW"


The line should look something like:

    <a href="'. htmlspecialchars($address) .'"'. $rel .' target="_NEW">'

That's it. Clicking on a link should now open the listed bookmark in a new tab or window (depending on how you have your browser configured).

Building my own search engine

(From the How Hard Could It Be? Department) Google Reader's pending retirement caused me to start looking for alternative readers. I decided that Tiny Tiny RSS was about the best, so I went about attempting to install it on the house server. Right away, it complained about the version of PHP employed.

Knowing that this would cause trouble with the KnowledgeTree instance, it looked like a general upgrade was needed. Discovered a problem: KnowledgeTree no longer offers their Community Edition. Ignoring the fact that this effectively angers anyone that ever contributed code to the project, this left me in a difficult spot: run TT-RSS in a VM or come up with alternatives.

After looking at various other DMS software, I started reading about how search engines index documents. Terms like inverse indexing, relative scoring, and soundexes have become familiar. After experimenting with various text management tools and a number of databases, I'm more in awe of Google (and Bing, somewhat) now than I was before.

All that being said... In the interim, I'm running the last available Community Edition of KnowledgeTree, running on the latest version of PHP, with known work-arounds in place (there's a growing number of them).

I'm also learning about some of the obstacles that are inherent in indexing documents. Example (and it's a horror): PDF appears to be a typesetting language. Even the good text extractors have issues with it. The letter "f" and bolding cause no end of problems in extracted text (there's usually a space after each "f", and bolding tends to produce "doubled" characters in extracted text).

Hopefully, at the end of all this, I'll have a simple program to index all of the documents (pdf, doc, & txt) that I've gathered over the years. If "simple" is unrealistic, I'll probably shoot for "portable".
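
To give a feel for the extraction step, below is a rough sketch of how the plain-text pull might look for those three formats; it assumes pdftotext (from poppler-utils) and antiword are installed, and it isn't the actual indexer:

#!/bin/bash
# crude text-extraction front end: pdf, doc, and txt only
for f in "$@"; do
  case "$f" in
    *.pdf) pdftotext -enc UTF-8 "$f" - ;;
    *.doc) antiword "$f" ;;
    *.txt) cat "$f" ;;
  esac | tr -s '[:space:]' ' '   # collapse the stray whitespace extractors leave behind
done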

Wednesday, May 1, 2013

Getting there

Attempting to get the wiki online. There seems to be an issue with file ownership and write permissions. I'll be working on it over the next few days.

Fixing stuff

It's starting to look like it will take a very long time to correct all the munging that Posterous did to each post's formatting and punctuation, as the fixes are all manual. I've decided to put it up as-is, work on a local copy, and push the updates to the blog on a weekly basis. That keeps the blog online, without interruption.

Tuesday, April 30, 2013

Update

Just before pushing the blog back onto the 757 server, I did a test build and noticed a couple things that will delay the push.

A quick "du -sh" check shows that the blog is 13GB in size!?? That's not right. Issue turned out to be in the rendering settings. Having the Blosxom script generate statics pages and encoding for xml entities adds a non-short string of 2525252525's to the link for each story. Setting $encode_xml_entities to 0 fixed this.

The blog's life on other blogging systems played havoc with its content. I probably should have guessed that they'd re-encode various pieces, but it's become a bit ugly. I'm having to read through each story (all 3000 of them) and fix paragraph tags, remove gratuitously inserted line breaks, and reset all non-standard character encodings.

The end result is a delay and the development of a strong opinion that anyone who writes custom markup languages for web sites (in this age of AJAX and CSS) deserves a sharp kick in the shins (both of 'em!) by someone wearing steel-toed boots.

Sunday, April 28, 2013

...and we're back!

Since Posterous has been acquired, the blog is falling back into its old haunts (those at 757). I managed to find most of the static-page-generating code that originally generated the blog (i.e., a highly customized Blosxom instance). The one thing lost by converting Blosxom to Wordpress to Posterous (and back again) is the Categories. I'll be debugging the code over the coming weeks, fixing various bugs as they pop up, and generally sorting posts back into Categories. If you see anything annoying, send me an email. (Note: stuff may move around a bit.)

In any case, I have a serious backlog of stuff to post. I just need to get back into the practice of writing short articles and posting them.