[or-cvs] r22617: {arm} Rewrite of the first third of the interface, providing vastl (in arm/release: . init interface interface/graphing util)

Damian Johnson atagar1 at gmail.com
Wed Jul 7 16:48:51 UTC 2010


Author: atagar
Date: 2010-07-07 16:48:51 +0000 (Wed, 07 Jul 2010)
New Revision: 22617

Added:
   arm/release/armrc.sample
   arm/release/init/project113.py
   arm/release/interface/graphing/
   arm/release/interface/graphing/__init__.py
   arm/release/interface/graphing/bandwidthStats.py
   arm/release/interface/graphing/connStats.py
   arm/release/interface/graphing/graphPanel.py
   arm/release/interface/graphing/psStats.py
   arm/release/util/conf.py
   arm/release/util/sysTools.py
   arm/release/util/torTools.py
Removed:
   arm/release/interface/bandwidthMonitor.py
   arm/release/interface/connCountMonitor.py
   arm/release/interface/cpuMemMonitor.py
   arm/release/interface/graphPanel.py
   arm/release/interface/graphing/__init__.py
   arm/release/interface/graphing/bandwidthStats.py
   arm/release/interface/graphing/connStats.py
   arm/release/interface/graphing/graphPanel.py
   arm/release/interface/graphing/psStats.py
Modified:
   arm/release/
   arm/release/ChangeLog
   arm/release/README
   arm/release/TODO
   arm/release/init/starter.py
   arm/release/interface/__init__.py
   arm/release/interface/confPanel.py
   arm/release/interface/connPanel.py
   arm/release/interface/controller.py
   arm/release/interface/fileDescriptorPopup.py
   arm/release/interface/headerPanel.py
   arm/release/interface/logPanel.py
   arm/release/util/__init__.py
   arm/release/util/connections.py
   arm/release/util/hostnames.py
   arm/release/util/log.py
   arm/release/util/panel.py
   arm/release/util/uiTools.py
Log:
Rewrite of the first third of the interface, providing vastly improved performance, maintainability, and a few very nice features.
added: settings are fetched from an optional armrc (update rates, controller password, caching, runlevels, etc)
added: system tools util providing simplified usage, suppression of leaks to stdout, logging, and optional caching
added: wrapper for accessing TorCtl providing:
  - client side caching for commonly fetched relay information (fingerprint, descriptor, etc)
  - singleton accessor and convenience functions, simplifying interface code
  - wrapper allowing reattachment to new controllers (ie, arm still works if tor's stopped then restarted - still in the works)
change: full rewrite of the header panel, providing:
  - notice for when tor's disconnected (with time-stamp)
  - lightweight redrawing (smarter caching and moved updating into a daemon thread)
  - more graceful handling of tiny displays
change: rewrite of graph panel and related stats, providing:
  - prepopulation of bandwidth information from the state file if possible
  - observed and measured bandwidth stats (requested by arma)
  - graph can be configured to display any numeric ps stat
  - third option for graphing bounds (restricting to both local minima and maxima)
  - substantially reduced redraw rate and making use of cached ps parameters (reducing call volume)
fix: preventing 'command unavailable' error messages from going to stdout, which disrupts the display (caught by sid77)
fix: removed -p option due to being a gaping security problem (caught by ioerror and nickm)
fix: crashing issue if TorCtl reports TorCtlClosed before the first refresh (caught by Tas)
fix: preventing the connection panel from initiating or resetting while in blind mode (caught by micah)
fix: ss resolution wasn't specifying the use of numeric ports (caught by data)
fix: parsing error when ExitPolicy is undefined (caught by Paul Menzel)
fix: revised sleep pattern used for threads, greatly reducing the time it takes to quit
fix: bug in defaulting the connection resolver to something predetermined to be available
fix: stopping connection resolution (and related failover message) when tor's stopped
fix: crashing issue when trying to resolve addresses without network connectivity
fix: forgot to join on connection resolver when quitting
fix: revised calculation for effective bandwidth rate to take MaxAdvertisedBandwidth into account
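
As a point of reference, here is a minimal sketch of how the new TorCtl wrapper is used, based on the util.torTools calls visible in the starter.py diff later in this commit; connect() and getConn() appear there, while the commented getInfo() accessor is only an assumption about the caching interface:

  import sys
  import util.torTools
  
  # opens an authenticated connection to the control port, prompting for the
  # controller password if one's needed and not provided
  conn = util.torTools.connect("127.0.0.1", 9051, None)
  if conn == None: sys.exit(1)
  
  # the singleton controller wraps the connection and caches relay information
  controller = util.torTools.getConn()
  controller.init(conn)
  
  # hypothetical accessor - commonly fetched fields like the relay's
  # fingerprint would come from the wrapper's cache rather than fresh
  # GETINFO queries
  # fingerprint = controller.getInfo("fingerprint")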




Property changes on: arm/release
___________________________________________________________________
Added: svn:mergeinfo
   + /arm/trunk:22227-22616

Modified: arm/release/ChangeLog
===================================================================
--- arm/release/ChangeLog	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/ChangeLog	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,5 +1,37 @@
 CHANGE LOG
 
+7/7/10 - version 1.3.6
+Rewrite of the first third of the interface, providing vastly improved performance, maintainability, and a few very nice features.
+
+    * added: settings are fetched from an optional armrc (update rates, controller password, caching, runlevels, etc)
+    * added: system tools util providing simplified usage, suppression of leaks to stdout, logging, and optional caching
+    * added: wrapper for accessing TorCtl providing:
+          o client side caching for commonly fetched relay information (fingerprint, descriptor, etc)
+          o singleton accessor and convenience functions, simplifying interface code
+          o wrapper allowing reattachment to new controllers (ie, arm still works if tor's stopped then restarted - still in the works)
+    * change: full rewrite of the header panel, providing:
+          o notice for when tor's disconnected (with time-stamp)
+          o lightweight redrawing (smarter caching and moved updating into a daemon thread)
+          o more graceful handling of tiny displays
+    * change: rewrite of graph panel and related stats, providing:
+          o prepopulation of bandwidth information from the state file if possible
+          o observed and measured bandwidth stats (requested by arma)
+          o graph can be configured to display any numeric ps stat
+          o third option for graphing bounds (restricting to both local minima and maxima)
+          o substantially reduced redraw rate and making use of cached ps parameters (reducing call volume)
+    * fix: preventing 'command unavailable' error messages from going to stdout, which disrupts the display (caught by sid77)
+    * fix: removed -p option due to being a gaping security problem (caught by ioerror and nickm)
+    * fix: crashing issue if TorCtl reports TorCtlClosed before the first refresh (caught by Tas)
+    * fix: preventing the connection panel from initiating or resetting while in blind mode (caught by micah)
+    * fix: ss resolution wasn't specifying the use of numeric ports (caught by data)
+    * fix: parsing error when ExitPolicy is undefined (caught by Paul Menzel)
+    * fix: revised sleep pattern used for threads, greatly reducing the time it takes to quit
+    * fix: bug in defaulting the connection resolver to something predetermined to be available
+    * fix: stopping connection resolution (and related failover message) when tor's stopped
+    * fix: crashing issue when trying to resolve addresses without network connectivity
+    * fix: forgot to join on connection resolver when quitting
+    * fix: revised calculation for effective bandwidth rate to take MaxAdvertisedBandwidth into account
+
 4/8/10 - version 1.3.5 (r22148)
 Utility and service rewrite (refactored roughly a third of the codebase, including revised APIs and much better documentation).
 

Modified: arm/release/README
===================================================================
--- arm/release/README	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/README	2010-07-07 16:48:51 UTC (rev 22617)
@@ -69,7 +69,7 @@
 status.
 
 That said, this is not a terribly big whoop. ISPs and anyone sniffing your
-connection already has this data - the only difference is that instead of
+connection already have this data - the only difference is that instead of
 saying "I am talking to x" you're saying "I'm talking to x, who's x?", meaning
 the resolver's also aware of who they are.
 
@@ -88,10 +88,11 @@
 ./
   arm       - startup script
   
-  ChangeLog - revision history
-  LICENSE   - copy of the gpl v3
-  README    - um... guess you figured this one out
-  TODO      - known issues, future plans, etc
+  armrc.sample - example arm configuration file with defaults
+  ChangeLog    - revision history
+  LICENSE      - copy of the gpl v3
+  README       - um... guess you figured this one out
+  TODO         - known issues, future plans, etc
   
   screenshot_page1.png
   screenshot_page2.png
@@ -102,16 +103,18 @@
     prereq.py  - checks python version and for required packages
   
   interface/
+    graphing/
+      __init__.py
+      graphPanel.py     - (page 1) presents graphs for data instances
+      bandwidthStats.py - tracks tor bandwidth usage
+      psStats.py        - tracks system information (by default cpu and memory usage)
+      connStats.py      - tracks number of tor connections
+    
     __init__.py
     controller.py          - main display loop, handling input and layout
     headerPanel.py         - top of all pages, providing general information
     
-    
-    graphPanel.py          - (page 1) presents graphs for data instances
-    bandwidthMonitor.py    - (graph data) tracks tor bandwidth usage
-    cpuMemMonitor.py       - (graph data) tracks tor cpu and memory usage
-    connCountMonitor.py    - (graph data) tracks number of tor connections
-    logPanel.py            - displays tor, arm, and torctl events
+    logPanel.py            - (page 1) displays tor, arm, and torctl events
     fileDescriptorPopup.py - (popup) displays file descriptors used by tor
     
     connPanel.py           - (page 2) displays information on tor connections
@@ -121,9 +124,12 @@
   
   util/
     __init__.py
+    conf.py        - loading and persistence for user configuration
     connections.py - service providing periodic connection lookups
     hostnames.py   - service providing nonblocking reverse dns lookups
     log.py         - aggregator for application events
     panel.py       - wrapper for safely working with curses subwindows
-    uiTools.py     - helper functions for interface
+    sysTools.py    - helper for system calls, providing client side caching
+    torTools.py    - wrapper for TorCtl, providing caching and derived information
+    uiTools.py     - helper functions for presenting the user interface
 

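The sysTools module itself isn't shown in this commit, so the following is only a rough sketch of the kind of cached system call helper described above; the call() name, its signature, and the cache layout are assumptions rather than the actual module's API:

  import time
  import subprocess
  
  CALL_CACHE = {}  # mapping of command -> (timestamp, output lines)
  
  def call(command, cacheAge=0):
    """
    Runs the given shell command, returning its stdout as a list of lines. If
    cacheAge is nonzero then results fetched within that many seconds are
    reused rather than running the command again.
    """
    
    now = time.time()
    if cacheAge > 0 and command in CALL_CACHE:
      timestamp, output = CALL_CACHE[command]
      if now - timestamp < cacheAge: return output
    
    # stderr is captured separately so error output doesn't leak to stdout
    # and disrupt the curses display
    proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, _ = proc.communicate()
    output = stdout.splitlines()
    CALL_CACHE[command] = (now, output)
    return output
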
Modified: arm/release/TODO
===================================================================
--- arm/release/TODO	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/TODO	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,124 +1,193 @@
 TODO
 
+- Roadmap for next release (1.3.7)
+  [ ] refactor panels
+      Currently the interface is a bit of a rat's nest (especially the
+      controller). The goal is to use better modularization to both simplify
+      the codebase and make it possible to use smarter caching to improve
+      performance (far too much is done in the ui logic). This work is in
+      progress - /init and /util are done and /interface is partly done. Known
+      bugs are being fixed while refactoring.
+        [ ] log panel
+          - option to clear log
+          - allow home/end keys to jump to start/end
+              also do this for the conn panel and conf panel (requested by dun)
+          - make log parsing script stand alone, with syntax hilighting, regex,
+              sorting, etc
+        [ ] conf panel
+          - move torrc validation into util
+          - condense tor/arm log listing types if they're the same
+              Ie, make default "TOR/ARM NOTICE - ERR"
+          - fetch text via getinfo rather than reading directly?
+              conn.get_info("config-text")
+        [-] conn panel (for version 1.3.8)
+          - check family connections to see if they're alive (VERSION cell
+              handshake?)
+          - fallback when pid or connection querying via pid is unavailable
+              List all connections listed both by netstat and the consensus
+          - note when connection times are estimates (color?), ie connection
+              was established before arm
+          - connection uptime to associate inbound/outbound connections?
+          - Identify controller connections (if it's arm, vidalia, etc) with
+              special detail page for them
+        [-] controller (for version 1.3.8)
+  [ ] provide performance ARM-DEBUG events
+      Help with diagnosing performance bottlenecks. This is pending the
+      codebase revisions to figure out the low hanging fruit for caching.
+  [ ] tor util
+        [X] wrapper for accessing torctl
+        [ ] allow arm to resume after restarting tor (attaching to a new torctl
+            instance)
+  [ ] setup scripts for arm
+        [ ] setup script to add to /usr/bin/arm (requested by ioerror)
+        [ ] look into CAPs to get around permission issues for connection
+            listing. Sudo wrapper for arm to help it run as the same user as
+            tor? IRC suggestions:
+              - man capabilities
+              - http://www.linuxjournal.com/article/5737
+        [-] provide Debian repository (for version 1.4.0)
+            Look into debian packaging, note system call dependencies, and mail
+            submit at bugs.debian.org with subject "RFP: arm" and starting with a
+            line "Package: wnpp". Also add to 'deb.torprojec.org'. (requested
+            by helmut)
+              * http://www.debian.org/doc/maint-guide/
+              * http://www.debian.org/doc/packaging-manuals/python-policy/
+              * http://showmedo.com/videotutorials/video?name=linuxJensMakingDeb
+  * release prep
+    * check performance of this version vs last version (general screen refresh
+        times)
+    * pylint --indent-string="  " --disable-msg-cat=CR interface/foo.py | less
+    * double check __init__.py and README for changes
+
 - Bugs
-	* Mac OSX and BSD have issues with netstat options
-			Reported that they aren't cross platform. Possibly use lsof as a 
-			fallback if an issue's detected.
-			notify John Case <case at sdf.lonestar.org>
-			caught by Christopher Davis
-	* torrc validation doesn't catch if parameters are missing
-	* revise multikey sort of connections
-			Currently using a pretty ugly hack. Look at:
-			http://www.velocityreviews.com/forums/
-				t356461-sorting-a-list-of-objects-by-multiple-attributes.html
-			and check for performance difference.
-	* header panel isn't properly detecting catch-all exit policies
-			Missing edge cases
-	* avoid hostname lookups of private connections
-			Stripped most of them but suspect there might be others (have assertions
-			check for this in a debug mode?)
-	* exit policy checks aren't handling all inputs
-			Still need to handle masks, private keyword, and prepended policy,
-			currently erroring on the side of caution.
-	* not catching events unexpected by arm
-			Future tor and TorCtl revisions could provide new events - these should
-			be given the "UNKNOWN" type.
-	* regex fails for multiline log entries
-	* when logging no events still showing brackets
-			The current code for dynamically sizing the events label is kinda
-			tricky. Putting this off until I've made a utility to handle this
-			uglyness.
-	* scrolling in the torrc isn't working properly when comments are stripped
-			Current method of displaying torrc is pretty stupid (lots of repeated
-			work in display loop). When rewritten fixing this bug should be trivial.
-	* quitting can hang several seconds when there's hostnames left to resolve
-			Not sure how to address this - problem is that the calls to 'host' can 
-			take a while to time out. Might need another thread to kill the calls?
-			Or forcefully terminate thread if it's taking too long (might be noisy)?
+  * the util modules assume that tor is running under the default command name
+      attempt to determine the command name at runtime (if the pid is available
+      then ps can do the mapping)
+  
+  * log panel:
+    * not catching events unexpected by arm
+        Future tor and TorCtl revisions could provide new events - these should
+        be given the "UNKNOWN" type.
+    * regex fails for multiline log entries (works for two lines, but not more)
+    * test that torctl events are being caught (not spotting them...)
+    * torctl events have their own configurable runlevels (provide options for
+        this)
+    * when logging no events still showing brackets
+        The current code for dynamically sizing the events label is kinda
+        tricky. Putting this off until revising this section.
+  
+  * conf panel:
+    * torrc validation doesn't catch if parameters are missing
+    * scrolling in the torrc isn't working properly when comments are stripped
+        Current method of displaying torrc is pretty stupid (lots of repeated
+        work in display loop). When rewritten fixing this bug should be
+        trivial.
+    * "ExitPolicy" entry in torrc (without path)
+        Produces "May 26 22:11:03.484 [warn] The abbreviation 'ExitPolic' is
+        deprecated. Please use 'ExitPolicy' instead". This is an error in the
+        torrc parsing when only the key is provided.
+  
+  * conn panel:
+    * revise multikey sort of connections
+        Currently using a pretty ugly hack. Look at:
+        http://www.velocityreviews.com/forums/
+          t356461-sorting-a-list-of-objects-by-multiple-attributes.html
+        and check for performance difference.
+    * replace checks against exit policy with Mike's torctl version
+        My version still isn't handling all inputs anyway (still need to handle
+        masks, private keyword, and prepended policy). Parse it from the rest
+        of the router if too heavy ("TorCtl.Router.will_exit_to instead").
+    * avoid hostname lookups of private connections
+        Stripped most of them but suspect there might be others (have assertions
+        check for this in a debug mode?)
+    * connection uptimes shouldn't show fractions of a second
+    * connections aren't cleared when control port closes
 
 - Features / Site
-	* rewrite codebase
-			Currently the interface is a bit of a rat's nest (especially the
-			controller). The goal is to use better modularization to both simplify
-			the codebase and make it possible to use smarter caching to improve
-			performance (far too much is done in the ui logic). This work is in
-			progress, having started with the initialization (/init) and now
-			concerning the utilities (/util). Migrating the following to util:
-				- os calls (to provide transparent platform independence)
-				- torrc validation
-				- wrapper for tor connection, state, and data parsing (abstracting
-					TorCtl connection should allow for arm to be resumed if tor restarts)
-	* provide bridge statistics
-			Include bridge related data via GETINFO option (feature request by
-			waltman).
-	* provide performance ARM-DEBUG events
-			Help with diagnosing performance bottlenecks. This is pending the
-			codebase revisions to figure out the low hanging fruit for caching.
-	* condense tor/arm log listing types if they're the same
-			Ie, make default "TOR/ARM NOTICE - ERR"
-	* graph for arm cpu/mem usage
-			Trivial to implement but not sure if this would be helpful.
-	* startup option to restrict resource usage or set refresh rate
-	* audit tor connections
-			Provide warnings if tor misbehaves, checks possibly including:
-				- ensuring ExitPolicyRejectPrivate is being obeyed
-				- check that ExitPolicy violations don't occure (not possible yet since
-					not all relays aren't identified)
-				- check that all connections are properly related to a circuit, for
-					instance no outbound connections without a corresponding inbound (not
-					possible yet due to being unable to correlate connections to circuts)
-	* add page that allows raw control port access
-			Piggyback on the arm connection, providing something like an interactive
-			prompt. In addition, provide:
-				- irc like help (ex "/help GETINFO" could provide a summary of getinfo
-				commands, partly using the results from "GETINFO info/names")
-				- tab completion and up/down populates previous entries
-				- warn and get confirmation if command would disrupt arm (for instance
-				'SETEVENTS')
-				- 'guard' option that restricts to GETINFO only	(start with this)
-				- issue sighup reset
-	* provide observed bandwidth
-			Newer relays have a 'w' entry that states the bandwidth and old versions
-			have client side measurements (third argument in 'Bandwidth' of
-			descriptor, note that it's in KB/s). Label the former (server side) as 
-			'Measured' and later (client side) as 'Observed' to differentiate.
-			requested by arma
-	* show advertised bandwidth
-			if set and there's extra room available show 'MaxAdvertisedBandwidth'
-	* check family connections to see if they're alive (VERSION cell handshake?)
-	* look into providing UPnP support
-			This might be provided by tor itself so wait and see...
-	* unit tests
-			Primarily for util, for instance 'addfstr' woudl be a good candidate.
+  * check if batch getInfo/getOption calls provide much performance benefit
+  * layout (css) bugs with site
+      Revise to use 'em' for measurements and somehow stretch image's y-margin?
+  * page with details on client circuits, attempting to detect details like
+      country, ISP, latency, exit policy for the circuit, traffic, etc
+  * attempt to clear controller password from memory
+      http://www.codexon.com/posts/clearing-passwords-in-memory-with-python
+  * escaping function for uiTools' formatted strings
+  * tor-weather like functionality (email notices)
+  * provide bridge / client country statistics
+      - Include bridge related data via GETINFO option (feature request by
+      waltman).
+      - Country data for client connections (requested by ioerror)
+  * make update rates configurable via the ui
+      Also provide option for saving these settings to the config
+  * config option to cap resource usage
+  * dialog with flag descriptions and other help
+  * switch check of ip address validity to regex?
+      match = re.match("(\d*)\.(\d*)\.(\d*)\.(\d*)", ip)
+      http://wang.yuxuan.org/blog/2009/4/2/python_script_to_convert_from_ip_range_to_ip_mask
+  * audit tor connections
+      Provide warnings if tor misbehaves, checks possibly including:
+        - ensuring ExitPolicyRejectPrivate is being obeyed
+        - check that ExitPolicy violations don't occur (not possible yet since
+          not all relays are identified)
+        - check that all connections are properly related to a circuit, for
+          instance no outbound connections without a corresponding inbound (not
+          possible yet due to being unable to correlate connections to circuits)
+  * check file descriptors being accessed by tor to see if they're outside the
+      known pattern
+  * allow killing of circuits? Probably not useful...
+  * add page that allows raw control port access
+      Start with -t (or -c?) option for commandline-only access with help,
+      syntax highlighting, and other spiffy extras
+      
+      Piggyback on the arm connection, providing something like an interactive
+      prompt. In addition, provide:
+        - irc like help (ex "/help GETINFO" could provide a summary of getinfo
+        commands, partly using the results from "GETINFO info/names")
+        - tab completion and up/down populates previous entries
+        - warn and get confirmation if command would disrupt arm (for instance
+        'SETEVENTS')
+        - 'guard' option that restricts to GETINFO only  (start with this)
+        - issue sighup reset
+  * menu with all torrc options (making them editable/toggleable)
+  * Setup wizard for new relays
+      Setting the password and such for torrc generation (idea by ioerror)
+  * menus?
+      http://gnosis.cx/publish/programming/charming_python_6.html
+  * look into better supporting hidden services (what could be useful here?)
+  * look into providing UPnP support
+      This might be provided by tor itself so wait and see...
+  * unit tests
+      Primarily for util, for instance 'addfstr' would be a good candidate.
+  * Investigations of other possible tools:
+    * look into additions to the used apis
+        - curses (python 2.6 extended?): http://docs.python.org/library/curses.html
+        - new control options (like "desc-annotations/id/<OR identity>")?
+        - look deeper into TorCtl functions (has a resolve function? hu?)
+    * whois lookup for relays? ISP listing?
+    * look into what sort of information tcpdump and iptraf provides (probably
+        can't use for privacy reasons)
+    * vnstat, nload, mrtg, and traceroute
 
 - Ideas (low priority)
-	* python 3 compatability
-			Currently blocked on TorCtl support.
-	* bundle script that dumps relay stats to stdout
-			Django has a small terminal coloring module that could be nice for
-			formatting. Could possibly include:
-				- desc / ns information for our relay
-				- ps / netstat stats like load, uptime, and connection counts, etc
-			derived from an idea by StrangeCharm
-	* show qos stats
-			Take a look at 'linux-tor-prio.sh' to see if any of the stats are 
-			available and interesting.
-	* localization
-			Abstract strings from code and provide on translation portal. Thus far
-			there hasn't been any requests for this.
-	* provide option for a consensus page
-			Shows full consensus with an interface similar to the connection panel.
-			For this Mike's ConsensusTracker would be helpful (though boost the
-			startup time by several seconds)
-	* provide Debian repository for arm
-			Look into debian packaging, note system call dependencies, and mail
-			submit at bugs.debian.org with subject "RFP: arm" and starting with a line
-			"Package: wnpp".
-			requested by helmut
-	* follow up on control-spec proposal
-			Proposal and related information is available at:
-			http://www.atagar.com/arm/controlSpecProposal.txt
-			
-			Unfortunatley this doesn't seem to be going anywhere so mothballed for
-			now.
+  * python 3 compatibility
+      Currently blocked on TorCtl support.
+  * bundle script that dumps relay stats to stdout
+      Django has a small terminal coloring module that could be nice for
+      formatting. Could possibly include:
+        - desc / ns information for our relay
+        - ps / netstat stats like load, uptime, and connection counts, etc
+      derived from an idea by StrangeCharm
+  * show qos stats
+      Take a look at 'linux-tor-prio.sh' to see if any of the stats are 
+      available and interesting.
+  * localization
+      Abstract strings from code and provide on translation portal. Thus far
+      there haven't been any requests for this.
+  * provide option for a consensus page
+      Shows full consensus with an interface similar to the connection panel.
+      For this Mike's ConsensusTracker would be helpful (though boost the
+      startup time by several seconds)
+  * follow up on control-spec proposals
+      Proposal and related information is available at:
+      http://archives.seul.org/or/dev/Jun-2010/msg00008.html
 

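A side note on the TODO item above about switching the IP validity check to a regex: the suggested pattern accepts empty octets and values above 255. A stricter illustrative sketch (not what arm currently ships, which uses the non-regex isValidIpAddr in starter.py) could be:

  import re
  
  def isValidIpAddr(ipStr):
    # anchors the pattern and requires one to three digits per octet
    match = re.match(r"^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$", ipStr)
    if not match: return False
    
    # each octet must fall within 0-255
    return all(0 <= int(octet) <= 255 for octet in match.groups())
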
Copied: arm/release/armrc.sample (from rev 22616, arm/trunk/armrc.sample)
===================================================================
--- arm/release/armrc.sample	                        (rev 0)
+++ arm/release/armrc.sample	2010-07-07 16:48:51 UTC (rev 22617)
@@ -0,0 +1,82 @@
+# startup options
+startup.controlPassword
+startup.interface.ipAddress 127.0.0.1
+startup.interface.port 9051
+startup.blindModeEnabled false
+startup.events N3
+
+features.colorInterface true
+
+# general graph parameters
+# interval: 0 -> each second,  1 -> 5 seconds,  2 -> 30 seconds,
+#           3 -> minutely,     4 -> half hour,  5 -> hourly,      6 -> daily
+# bound:    0 -> global maxima,        1 -> local maxima, 2 -> tight
+# type:     0 -> None, 1 -> Bandwidth, 2 -> Connections,  3 -> System Resources
+# frequentRefresh: updates stats each second if true, otherwise matches interval
+
+features.graph.interval 0
+features.graph.bound 1
+features.graph.type 1
+features.graph.maxSize 150
+features.graph.frequentRefresh true
+
+# ps graph parameters
+# primary/secondaryStat: any numeric field provided by the ps command
+# cachedOnly: determines if the graph should query ps or rely on cached results
+#             (this lowers the call volume but limits the graph's granularity)
+
+features.graph.ps.primaryStat %cpu
+features.graph.ps.secondaryStat rss
+features.graph.ps.cachedOnly true
+
+features.graph.bw.prepopulate true
+features.graph.bw.accounting.show true
+features.graph.bw.accounting.rate 10
+features.graph.bw.accounting.isTimeLong false
+
+# seconds between querying information
+queries.ps.rate 5
+queries.connections.minRate 5
+
+# Thread pool size for hostname resolutions (determining the maximum number of
+# concurrent requests). Upping this to around thirty or so seems to be
+# problematic, causing intermittent seizing.
+
+queries.hostnames.poolSize 5
+
+# Uses python's internal "socket.gethostbyaddr" to resolve addresses rather
+# than the host command. This is ignored if the system's unable to make
+# parallel requests. Resolving this way seems to be much slower than host calls
+# in practice.
+
+queries.hostnames.useSocketModule false
+
+# caching parameters
+cache.sysCalls.size 600
+cache.hostnames.size 700000
+cache.hostnames.trimSize 200000
+cache.armLog.size 1000
+cache.armLog.trimSize 200
+
+# runlevels at which to log arm related events
+log.configEntryNotFound NONE
+log.configEntryUndefined NOTICE
+log.configEntryTypeError NOTICE
+log.torGetInfo DEBUG
+log.torGetConf DEBUG
+log.sysCallMade DEBUG
+log.sysCallCached NONE
+log.sysCallFailed INFO
+log.sysCallCacheGrowing INFO
+log.panelRecreated DEBUG
+log.graph.ps.invalidStat WARN
+log.graph.ps.abandon WARN
+log.graph.bw.prepopulateSuccess NOTICE
+log.graph.bw.prepopulateFailure NOTICE
+log.connLookupFailed INFO
+log.connLookupFailover NOTICE
+log.connLookupAbandon WARN
+log.connLookupRateGrowing NONE
+log.hostnameCacheTrimmed INFO
+log.cursesColorSupport INFO
+

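For context, these armrc settings are read through the new conf util. Below is a minimal sketch of its use, based on the getConfig(), load(), and get() calls visible in the starter.py diff later in this commit; whether get() casts values to the default's type isn't shown here, so treat that as an assumption:

  import os
  import util.conf
  
  # fetches the shared "arm" configuration instance and loads the user's armrc
  config = util.conf.getConfig("arm")
  config.path = os.path.expanduser("~/.armrc")
  config.load()
  
  # looks up settings, falling back to the given default if a key's undefined
  authPassword = config.get("startup.controlPassword", None)
  graphInterval = config.get("features.graph.interval", 0)
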
Copied: arm/release/init/project113.py (from rev 22616, arm/trunk/init/project113.py)
===================================================================
--- arm/release/init/project113.py	                        (rev 0)
+++ arm/release/init/project113.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -0,0 +1,192 @@
+"""
+project113.py
+
+Quick, little script for periodically checking the relay count in the
+consensus. Queries are done every couple hours and this sends an email notice
+if it changes dramatically throughout the week.
+"""
+
+# TODO: this whole script is experimental and should be rewritten once we
+# figure out what works best...
+
+import sys
+import time
+import getpass
+import smtplib
+from email.mime.text import MIMEText
+
+sys.path[0] = sys.path[0][:-5]
+
+import util.torTools
+
+SAMPLING_INTERVAL = 7200 # two hours
+
+USERNAME = ""
+PASSWORD = ""
+RECEIVER = ""
+ALERT_HOURLY_DROP = False # sends alert for hourly network shrinking if true
+
+# size of change (+/-) at which an alert is sent
+BIHOURLY_THRESHOLD = 15
+DAILY_THRESHOLD = 50
+WEEKLY_THRESHOLD = 100
+
+SEEN_FINGERPRINTS = set()
+
+def sendAlert(msg):
+  mimeMsg = MIMEText(msg)
+  mimeMsg['Subject'] = "Tor Relay Threshold Alert"
+  mimeMsg['From'] = USERNAME
+  mimeMsg['To'] = RECEIVER
+  
+  # Send the message via our own SMTP server, but don't include the
+  # envelope header.
+  try:
+    server = smtplib.SMTP('smtp.gmail.com:587')
+    server.starttls()
+    server.login(USERNAME, PASSWORD)
+    
+    server.sendmail(USERNAME, [RECEIVER], mimeMsg.as_string())
+    server.quit()
+  except smtplib.SMTPAuthenticationError:
+    print "Failed to sent alert"
+
+def getCount(conn):
+  nsEntries = conn.get_network_status()
+  return len(nsEntries)
+
+def getExits(conn):
+  # provides ns entries associated with exit relays
+  exitEntries = []
+  for nsEntry in conn.get_network_status():
+    queryParam = "desc/id/%s" % nsEntry.idhex
+    descEntry = conn.get_info(queryParam)[queryParam]
+    
+    isExit = False
+    for line in descEntry.split("\n"):
+      if line == "reject *:*": break # reject all before any accept entries
+      elif line.startswith("accept"):
+        # Guess this to be an exit (erroring on the side of inclusiveness)
+        isExit = True
+        break
+    
+    if isExit: exitEntries.append(nsEntry)
+  
+  return exitEntries
+
+def getNewExits(newEntries):
+  # provides relays that have never been seen before
+  diffMapping = dict([(entry.idhex, entry) for entry in newEntries])
+  
+  for fingerprint in SEEN_FINGERPRINTS:
+    if fingerprint in diffMapping.keys(): del diffMapping[fingerprint]
+  
+  return diffMapping.values()
+
+def getExitsDiff(newEntries, oldEntries):
+  # provides relays in newEntries but not oldEntries
+  diffMapping = dict([(entry.idhex, entry) for entry in newEntries])
+  
+  for entry in oldEntries:
+    if entry.idhex in diffMapping.keys(): del diffMapping[entry.idhex]
+  
+  return diffMapping.values()
+
+if __name__ == '__main__':
+  if not PASSWORD: PASSWORD = getpass.getpass("GMail Password: ")
+  conn = util.torTools.connect()
+  counts = [] # has entries for up to the past week
+  newCounts = [] # parallel listing for new entries added on each time period
+  nsEntries = [] # parallel listing of exit relay ns entries
+  lastQuery = 0
+  tick = 0
+  
+  while True:
+    tick += 1
+    
+    # sleep for a couple hours
+    while time.time() < (lastQuery + SAMPLING_INTERVAL):
+      sleepTime = max(1, SAMPLING_INTERVAL - (time.time() - lastQuery))
+      time.sleep(sleepTime)
+    
+    # adds new count to the beginning
+    exitEntries = getExits(conn)
+    newExitEntries = getNewExits(exitEntries)
+    count = len(exitEntries)
+    newCount = len(newExitEntries)
+    
+    counts.insert(0, count)
+    newCounts.insert(0, newCount)
+    nsEntries.insert(0, exitEntries)
+    if len(counts) > 84:
+      counts.pop()
+      newCounts.pop()
+      nsEntries.pop()
+    
+    # check if we broke any thresholds (alert at the lowest increment)
+    alarmHourly, alarmDaily, alarmWeekly = False, False, False
+    
+    if len(counts) >= 2:
+      #if ALERT_HOURLY_DROP: alarmHourly = abs(count - counts[1]) >= BIHOURLY_THRESHOLD
+      #else: alarmHourly = count - counts[1] >= BIHOURLY_THRESHOLD
+      alarmHourly = newCount >= BIHOURLY_THRESHOLD
+    
+    if len(counts) >= 3:
+      dayMin, dayMax = min(counts[:12]), max(counts[:12])
+      alarmDaily = (dayMax - dayMin) > DAILY_THRESHOLD
+    
+    if len(counts) >= 12:
+      weekMin, weekMax = min(counts), max(counts)
+      alarmWeekly = (weekMax - weekMin) > WEEKLY_THRESHOLD
+    
+    # notes entry on terminal
+    lastQuery = time.time()
+    timeLabel = time.strftime("%H:%M %m/%d/%Y", time.localtime(lastQuery))
+    print "%s - %s exits (%s new)" % (timeLabel, count, newCount)
+    
+    # sends a notice with counts for the last week
+    if tick > 5 and (alarmHourly or alarmDaily or alarmWeekly):
+      if alarmHourly: threshold = "hourly"
+      elif alarmDaily: threshold = "daily"
+      elif alarmWeekly: threshold = "weekly"
+      
+      msg = "%s threshold broken\n" % threshold
+      
+      msg += "\nexit counts:\n"
+      entryTime = lastQuery
+      for i in range(len(counts)):
+        countEntry, newCountEntry = counts[i], newCounts[i]
+        timeLabel = time.strftime("%H:%M %m/%d/%Y", time.localtime(entryTime))
+        msg += "%s - %i (%i new)\n" % (timeLabel, countEntry, newCountEntry)
+        entryTime -= SAMPLING_INTERVAL
+      
+      msg += "\nnew exits (hourly):\n"
+      for entry in newExitEntries:
+        msg += "%s (%s:%s)\n" % (entry.idhex, entry.ip, entry.orport)
+        msg += "    nickname: %s\n    flags: %s\n\n" % (entry.nickname, ", ".join(entry.flags))
+      
+      if len(counts) >= 12:
+        msg += "\nnew exits (daily):\n"
+        entriesDiff = getExitsDiff(nsEntries[0], nsEntries[12])
+        for entry in entriesDiff:
+          msg += "%s (%s:%s)\n" % (entry.idhex, entry.ip, entry.orport)
+          msg += "    nickname: %s\n    flags: %s\n\n" % (entry.nickname, ", ".join(entry.flags))
+      
+      if len(counts) >= 48:
+        # require at least four days of data
+        msg += "\nnew exits (weekly):\n"
+        entriesDiff = getExitsDiff(nsEntries[0], nsEntries[-1])
+        for entry in entriesDiff:
+          msg += "%s (%s:%s)\n" % (entry.idhex, entry.ip, entry.orport)
+          msg += "    nickname: %s\n    flags: %s\n\n" % (entry.nickname, ", ".join(entry.flags))
+      
+      sendAlert(msg)
+      
+      # add all new fingerprints to seen set
+      for entry in nsEntries[0]:
+        SEEN_FINGERPRINTS.add(entry.idhex)
+      
+      # clears entries so we don't repeatedly send alarms for the same event
+      if alarmDaily: del counts[2:]
+      elif alarmWeekly: del counts[12:]
+

Modified: arm/release/init/starter.py
===================================================================
--- arm/release/init/starter.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/init/starter.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -6,31 +6,43 @@
 command line parameters.
 """
 
+import os
 import sys
-import socket
 import getopt
-import getpass
 
 # includes parent directory rather than init in path (so sibling modules are included)
 sys.path[0] = sys.path[0][:-5]
 
-from TorCtl import TorCtl, TorUtil
-from interface import controller, logPanel
+import interface.controller
+import interface.logPanel
+import util.conf
+import util.connections
+import util.hostnames
+import util.log
+import util.panel
+import util.sysTools
+import util.torTools
+import util.uiTools
+import TorCtl.TorUtil
 
-VERSION = "1.3.5"
-LAST_MODIFIED = "Apr 8, 2010"
+VERSION = "1.3.6"
+LAST_MODIFIED = "July 7, 2010"
 
-DEFAULT_CONTROL_ADDR = "127.0.0.1"
-DEFAULT_CONTROL_PORT = 9051
-DEFAULT_LOGGED_EVENTS = "N3" # tor and arm NOTICE, WARN, and ERR events
+DEFAULT_CONFIG = os.path.expanduser("~/.armrc")
+DEFAULTS = {"startup.controlPassword": None,
+            "startup.interface.ipAddress": "127.0.0.1",
+            "startup.interface.port": 9051,
+            "startup.blindModeEnabled": False,
+            "startup.events": "N3"}
 
-OPT = "i:p:be:vh"
-OPT_EXPANDED = ["interface=", "password=", "blind", "event=", "version", "help"]
+OPT = "i:c:be:vh"
+OPT_EXPANDED = ["interface=", "config=", "blind", "event=", "version", "help"]
 HELP_MSG = """Usage arm [OPTION]
 Terminal status monitor for Tor relays.
 
   -i, --interface [ADDRESS:]PORT  change control interface from %s:%i
-  -p, --password PASSWORD         authenticate using password (skip prompt)
+  -c, --config CONFIG_PATH        loads configuration options, CONFIG_PATH
+                                    defaults to: %s
   -b, --blind                     disable connection lookups
   -e, --event EVENT_FLAGS         event types in message log  (default: %s)
 %s
@@ -39,8 +51,8 @@
 
 Example:
 arm -b -i 1643          hide connection data, attaching to control port 1643
-arm -e we -p nemesis    use password 'nemesis' with 'WARN'/'ERR' events
-""" % (DEFAULT_CONTROL_ADDR, DEFAULT_CONTROL_PORT, DEFAULT_LOGGED_EVENTS, logPanel.EVENT_LISTING)
+arm -e we -c /tmp/cfg   use this configuration file with 'WARN'/'ERR' events
+""" % (DEFAULTS["startup.interface.ipAddress"], DEFAULTS["startup.interface.port"], DEFAULT_CONFIG, DEFAULTS["startup.events"], interface.logPanel.EVENT_LISTING)
 
 def isValidIpAddr(ipStr):
   """
@@ -66,11 +78,8 @@
   return True
 
 if __name__ == '__main__':
-  controlAddr = DEFAULT_CONTROL_ADDR     # controller interface IP address
-  controlPort = DEFAULT_CONTROL_PORT     # controller interface port
-  authPassword = ""                      # authentication password (prompts if unset and needed)
-  isBlindMode = False                    # allows connection lookups to be disabled
-  loggedEvents = DEFAULT_LOGGED_EVENTS   # flags for event types in message log
+  param = dict([(key, None) for key in DEFAULTS.keys()])
+  configPath = DEFAULT_CONFIG            # path used for customized configuration
   
   # parses user input, noting any issues
   try:
@@ -82,29 +91,26 @@
   for opt, arg in opts:
     if opt in ("-i", "--interface"):
       # defines control interface address/port
+      controlAddr, controlPort = None, None
+      divIndex = arg.find(":")
+      
       try:
-        divIndex = arg.find(":")
-        
         if divIndex == -1:
           controlPort = int(arg)
         else:
           controlAddr = arg[0:divIndex]
           controlPort = int(arg[divIndex + 1:])
-        
-        # validates that input is a valid ip address and port
-        if divIndex != -1 and not isValidIpAddr(controlAddr):
-          raise AssertionError("'%s' isn't a valid IP address" % controlAddr)
-        elif controlPort < 0 or controlPort > 65535:
-          raise AssertionError("'%s' isn't a valid port number (ports range 0-65535)" % controlPort)
       except ValueError:
         print "'%s' isn't a valid port number" % arg
         sys.exit()
-      except AssertionError, exc:
-        print exc
-        sys.exit()
-    elif opt in ("-p", "--password"): authPassword = arg    # sets authentication password
-    elif opt in ("-b", "--blind"): isBlindMode = True       # prevents connection lookups
-    elif opt in ("-e", "--event"): loggedEvents = arg       # set event flags
+      
+      param["startup.interface.ipAddress"] = controlAddr
+      param["startup.interface.port"] = controlPort
+    elif opt in ("-c", "--config"): configPath = arg  # sets path of user's config
+    elif opt in ("-b", "--blind"):
+      param["startup.blindModeEnabled"] = True        # prevents connection lookups
+    elif opt in ("-e", "--event"):
+      param["startup.events"] = arg                   # set event flags
     elif opt in ("-v", "--version"):
       print "arm version %s (released %s)\n" % (VERSION, LAST_MODIFIED)
       sys.exit()
@@ -112,85 +118,63 @@
       print HELP_MSG
       sys.exit()
   
+  # attempts to load user's custom configuration
+  config = util.conf.getConfig("arm")
+  config.path = configPath
+  
+  if os.path.exists(configPath):
+    try:
+      config.load()
+      
+      # revises defaults to match user's configuration
+      config.update(DEFAULTS)
+      
+      # loads user preferences for utilities
+      for utilModule in (util.conf, util.connections, util.hostnames, util.log, util.panel, util.sysTools, util.torTools, util.uiTools):
+        utilModule.loadConfig(config)
+    except IOError, exc:
+      msg = "Failed to load configuration (using defaults): \"%s\"" % str(exc)
+      util.log.log(util.log.WARN, msg)
+  else:
+    msg = "No configuration found at '%s', using defaults" % configPath
+    util.log.log(util.log.NOTICE, msg)
+  
+  # overwrites undefined parameters with defaults
+  for key in param.keys():
+    if param[key] == None: param[key] = DEFAULTS[key]
+  
+  # validates that input has a valid ip address and port
+  controlAddr = param["startup.interface.ipAddress"]
+  controlPort = param["startup.interface.port"]
+  
+  if not isValidIpAddr(controlAddr):
+    print "'%s' isn't a valid IP address" % controlAddr
+    sys.exit()
+  elif controlPort < 0 or controlPort > 65535:
+    print "'%s' isn't a valid port number (ports range 0-65535)" % controlPort
+    sys.exit()
+  
   # validates and expands log event flags
   try:
-    expandedEvents = logPanel.expandEvents(loggedEvents)
+    expandedEvents = interface.logPanel.expandEvents(param["startup.events"])
   except ValueError, exc:
     for flag in str(exc):
       print "Unrecognized event flag: %s" % flag
     sys.exit()
   
   # temporarily disables TorCtl logging to prevent issues from going to stdout while starting
-  TorUtil.loglevel = "NONE"
+  TorCtl.TorUtil.loglevel = "NONE"
   
-  # attempts to open a socket to the tor server
-  try:
-    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-    s.connect((controlAddr, controlPort))
-    conn = TorCtl.Connection(s)
-  except socket.error, exc:
-    if str(exc) == "[Errno 111] Connection refused":
-      # most common case - tor control port isn't available
-      print "Connection refused. Is the ControlPort enabled?"
-    else:
-      # less common issue - provide exc message
-      print "Failed to establish socket: %s" % exc
-    
-    sys.exit()
+  # sets up TorCtl connection, prompting for the passphrase if necessary and
+  # sending problems to stdout if they arise
+  util.torTools.INCORRECT_PASSWORD_MSG = "Controller password found in '%s' was incorrect" % configPath
+  authPassword = config.get("startup.controlPassword", DEFAULTS["startup.controlPassword"])
+  conn = util.torTools.connect(controlAddr, controlPort, authPassword)
+  if conn == None: sys.exit(1)
   
-  # check PROTOCOLINFO for authentication type
-  try:
-    authInfo = conn.sendAndRecv("PROTOCOLINFO\r\n")[1][1]
-  except TorCtl.ErrorReply, exc:
-    print "Unable to query PROTOCOLINFO for authentication type: %s" % exc
-    sys.exit()
+  controller = util.torTools.getConn()
+  controller.init(conn)
   
-  try:
-    if authInfo.startswith("AUTH METHODS=NULL"):
-      # no authentication required
-      conn.authenticate("")
-    elif authInfo.startswith("AUTH METHODS=HASHEDPASSWORD"):
-      # password authentication, promts for password if it wasn't provided
-      try:
-        if not authPassword: authPassword = getpass.getpass()
-      except KeyboardInterrupt:
-        sys.exit()
-      
-      conn.authenticate(authPassword)
-    elif authInfo.startswith("AUTH METHODS=COOKIE"):
-      # cookie authtication, parses path to authentication cookie
-      start = authInfo.find("COOKIEFILE=\"") + 12
-      end = authInfo[start:].find("\"")
-      authCookiePath = authInfo[start:start + end]
-      
-      try:
-        authCookie = open(authCookiePath, "r")
-        conn.authenticate_cookie(authCookie)
-        authCookie.close()
-      except IOError, exc:
-        # cleaner message for common errors
-        issue = None
-        if str(exc).startswith("[Errno 13] Permission denied"): issue = "permission denied"
-        elif str(exc).startswith("[Errno 2] No such file or directory"): issue = "file doesn't exist"
-        
-        # if problem's recognized give concise message, otherwise print exception string
-        if issue: print "Failed to read authentication cookie (%s): %s" % (issue, authCookiePath)
-        else: print "Failed to read authentication cookie: %s" % exc
-        
-        sys.exit()
-    else:
-      # authentication type unrecognized (probably a new addition to the controlSpec)
-      print "Unrecognized authentication type: %s" % authInfo
-      sys.exit()
-  except TorCtl.ErrorReply, exc:
-    # authentication failed
-    issue = str(exc)
-    if str(exc).startswith("515 Authentication failed: Password did not match"): issue = "password incorrect"
-    if str(exc) == "515 Authentication failed: Wrong length on authentication cookie.": issue = "cookie value incorrect"
-    
-    print "Unable to authenticate: %s" % issue
-    sys.exit()
-  
-  controller.startTorMonitor(conn, expandedEvents, isBlindMode)
+  interface.controller.startTorMonitor(expandedEvents, param["startup.blindModeEnabled"])
   conn.close()
 

Modified: arm/release/interface/__init__.py
===================================================================
--- arm/release/interface/__init__.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/__init__.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -2,5 +2,5 @@
 Panels, popups, and handlers comprising the arm user interface.
 """
 
-__all__ = ["bandwidthMonitor", "confPanel", "connCountMonitor", "connPanel", "controller", "cpuMemMonitor", "descriptorPopup", "fileDescriptorPopup", "graphPanel", "headerPanel", "logPanel"]
+__all__ = ["confPanel", "connPanel", "controller", "descriptorPopup", "fileDescriptorPopup", "headerPanel", "logPanel"]
 

Deleted: arm/release/interface/bandwidthMonitor.py
===================================================================
--- arm/release/interface/bandwidthMonitor.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/bandwidthMonitor.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,185 +0,0 @@
-#!/usr/bin/env python
-# bandwidthMonitor.py -- Tracks stats concerning bandwidth usage.
-# Released under the GPL v3 (http://www.gnu.org/licenses/gpl.html)
-
-import time
-import socket
-from TorCtl import TorCtl
-
-import graphPanel
-from util import uiTools
-
-DL_COLOR = "green"  # download section color
-UL_COLOR = "cyan"   # upload section color
-
-# width at which panel abandons placing optional stats (avg and total) with
-# header in favor of replacing the x-axis label
-COLLAPSE_WIDTH = 135
-
-class BandwidthMonitor(graphPanel.GraphStats, TorCtl.PostEventListener):
-  """
-  Tor event listener, taking bandwidth sampling to draw a bar graph. This is
-  updated every second by the BW events.
-  """
-  
-  def __init__(self, conn):
-    graphPanel.GraphStats.__init__(self)
-    TorCtl.PostEventListener.__init__(self)
-    self.conn = conn              # Tor control port connection
-    self.accountingInfo = None    # accounting data (set by _updateAccountingInfo method)
-    
-    # dummy values for static data
-    self.isAccounting = False
-    self.bwRate, self.bwBurst = None, None
-    self.resetOptions()
-  
-  def resetOptions(self):
-    """
-    Checks with tor for static bandwidth parameters (rates, accounting
-    information, etc).
-    """
-    
-    try:
-      if not self.conn: raise ValueError
-      self.isAccounting = self.conn.get_info('accounting/enabled')['accounting/enabled'] == '1'
-      
-      # static limit stats for label, uses relay stats if defined (internal behavior of tor)
-      bwStats = self.conn.get_option(['BandwidthRate', 'BandwidthBurst'])
-      relayStats = self.conn.get_option(['RelayBandwidthRate', 'RelayBandwidthBurst'])
-      
-      self.bwRate = uiTools.getSizeLabel(int(bwStats[0][1] if relayStats[0][1] == "0" else relayStats[0][1]), 1)
-      self.bwBurst = uiTools.getSizeLabel(int(bwStats[1][1] if relayStats[1][1] == "0" else relayStats[1][1]), 1)
-      
-      # if both are using rounded values then strip off the ".0" decimal
-      if ".0" in self.bwRate and ".0" in self.bwBurst:
-        self.bwRate = self.bwRate.replace(".0", "")
-        self.bwBurst = self.bwBurst.replace(".0", "")
-      
-    except (ValueError, socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed):
-      pass # keep old values
-    
-    # this doesn't track accounting stats when paused so doesn't need a custom pauseBuffer
-    contentHeight = 13 if self.isAccounting else 10
-    graphPanel.GraphStats.initialize(self, DL_COLOR, UL_COLOR, contentHeight)
-  
-  def bandwidth_event(self, event):
-    self._processEvent(event.read / 1024.0, event.written / 1024.0)
-  
-  def draw(self, panel):
-    # if display is narrow, overwrites x-axis labels with avg / total stats
-    if panel.maxX <= COLLAPSE_WIDTH:
-      # clears line
-      panel.addstr(8, 0, " " * 200)
-      graphCol = min((panel.maxX - 10) / 2, graphPanel.MAX_GRAPH_COL)
-      
-      primaryFooter = "%s, %s" % (self._getAvgLabel(True), self._getTotalLabel(True))
-      secondaryFooter = "%s, %s" % (self._getAvgLabel(False), self._getTotalLabel(False))
-      
-      panel.addstr(8, 1, primaryFooter, uiTools.getColor(self.primaryColor))
-      panel.addstr(8, graphCol + 6, secondaryFooter, uiTools.getColor(self.secondaryColor))
-    
-    # provides accounting stats if enabled
-    if self.isAccounting:
-      if not self.isPaused: self._updateAccountingInfo()
-      
-      if self.accountingInfo:
-        status = self.accountingInfo["status"]
-        hibernateColor = "green"
-        if status == "soft": hibernateColor = "yellow"
-        elif status == "hard": hibernateColor = "red"
-        
-        panel.addfstr(10, 0, "<b>Accounting (<%s>%s</%s>)</b>" % (hibernateColor, status, hibernateColor))
-        panel.addstr(10, 35, "Time to reset: %s" % self.accountingInfo["resetTime"])
-        panel.addstr(11, 2, "%s / %s" % (self.accountingInfo["read"], self.accountingInfo["readLimit"]), uiTools.getColor(self.primaryColor))
-        panel.addstr(11, 37, "%s / %s" % (self.accountingInfo["written"], self.accountingInfo["writtenLimit"]), uiTools.getColor(self.secondaryColor))
-      else:
-        panel.addfstr(10, 0, "<b>Accounting:</b> Connection Closed...")
-  
-  def getTitle(self, width):
-    # provides label, dropping stats if there's not enough room
-    capLabel = "cap: %s" % self.bwRate if self.bwRate else ""
-    burstLabel = "burst: %s" % self.bwBurst if self.bwBurst else ""
-    
-    if capLabel and burstLabel:
-      bwLabel = " (%s, %s)" % (capLabel, burstLabel)
-    elif capLabel or burstLabel:
-      # only one is set - use whatever's avaialble
-      bwLabel = " (%s%s)" % (capLabel, burstLabel)
-    else:
-      bwLabel = ""
-    
-    labelContents = "Bandwidth%s:" % bwLabel
-    if width < len(labelContents):
-      labelContents = "%s):" % labelContents[:labelContents.find(",")]  # removes burst measure
-      if width < len(labelContents): labelContents = "Bandwidth:"       # removes both
-    
-    return labelContents
-  
-  def getHeaderLabel(self, width, isPrimary):
-    graphType = "Downloaded" if isPrimary else "Uploaded"
-    stats = [""]
-    
-    # conditional is to avoid flickering as stats change size for tty terminals
-    if width * 2 > COLLAPSE_WIDTH:
-      stats = [""] * 3
-      stats[1] = "- %s" % self._getAvgLabel(isPrimary)
-      stats[2] = ", %s" % self._getTotalLabel(isPrimary)
-    
-    stats[0] = "%-14s" % ("%s/sec" % uiTools.getSizeLabel((self.lastPrimary if isPrimary else self.lastSecondary) * 1024, 1))
-    
-    labeling = graphType + " (" + "".join(stats).strip() + "):"
-    while (len(labeling) >= width):
-      if len(stats) > 1:
-        del stats[-1]
-        labeling = graphType + " (" + "".join(stats).strip() + "):"
-      else:
-        labeling = graphType + ":"
-        break
-    
-    return labeling
-  
-  def _getAvgLabel(self, isPrimary):
-    total = self.primaryTotal if isPrimary else self.secondaryTotal
-    return "avg: %s/sec" % uiTools.getSizeLabel((total / max(1, self.tick)) * 1024, 1)
-  
-  def _getTotalLabel(self, isPrimary):
-    total = self.primaryTotal if isPrimary else self.secondaryTotal
-    return "total: %s" % uiTools.getSizeLabel(total * 1024, 1)
-  
-  def _updateAccountingInfo(self):
-    """
-    Updates mapping used for accounting info. This includes the following keys:
-    status, resetTime, read, written, readLimit, writtenLimit
-    
-    Sets mapping to None if the Tor connection is closed.
-    """
-    
-    try:
-      self.accountingInfo = {}
-      
-      accountingParams = self.conn.get_info(["accounting/hibernating", "accounting/bytes", "accounting/bytes-left", "accounting/interval-end"])
-      self.accountingInfo["status"] = accountingParams["accounting/hibernating"]
-      
-      # converts from gmt to local with respect to DST
-      if time.localtime()[8]: tz_offset = time.altzone
-      else: tz_offset = time.timezone
-      
-      sec = time.mktime(time.strptime(accountingParams["accounting/interval-end"], "%Y-%m-%d %H:%M:%S")) - time.time() - tz_offset
-      resetHours = sec / 3600
-      sec %= 3600
-      resetMin = sec / 60
-      sec %= 60
-      self.accountingInfo["resetTime"] = "%i:%02i:%02i" % (resetHours, resetMin, sec)
-      
-      read = int(accountingParams["accounting/bytes"].split(" ")[0])
-      written = int(accountingParams["accounting/bytes"].split(" ")[1])
-      readLeft = int(accountingParams["accounting/bytes-left"].split(" ")[0])
-      writtenLeft = int(accountingParams["accounting/bytes-left"].split(" ")[1])
-      
-      self.accountingInfo["read"] = uiTools.getSizeLabel(read)
-      self.accountingInfo["written"] = uiTools.getSizeLabel(written)
-      self.accountingInfo["readLimit"] = uiTools.getSizeLabel(read + readLeft)
-      self.accountingInfo["writtenLimit"] = uiTools.getSizeLabel(written + writtenLeft)
-    except (socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed):
-      self.accountingInfo = None
-

Modified: arm/release/interface/confPanel.py
===================================================================
--- arm/release/interface/confPanel.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/confPanel.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -37,7 +37,7 @@
   """
   
   def __init__(self, stdscr, confLocation, conn):
-    panel.Panel.__init__(self, stdscr, 0)
+    panel.Panel.__init__(self, stdscr, "conf", 0)
     self.confLocation = confLocation
     self.showLineNum = True
     self.stripComments = False
@@ -176,7 +176,7 @@
     elif key == ord('s') or key == ord('S'):
       self.stripComments = not self.stripComments
       self.scroll = 0
-    self.redraw()
+    self.redraw(True)
   
   def draw(self, subwindow, width, height):
     self.addstr(0, 0, "Tor Config (%s):" % self.confLocation, curses.A_STANDOUT)

Deleted: arm/release/interface/connCountMonitor.py
===================================================================
--- arm/release/interface/connCountMonitor.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/connCountMonitor.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,57 +0,0 @@
-#!/usr/bin/env python
-# connCountMonitor.py -- Tracks the number of connections made by Tor.
-# Released under the GPL v3 (http://www.gnu.org/licenses/gpl.html)
-
-import socket
-from TorCtl import TorCtl
-
-import graphPanel
-from util import connections
-
-class ConnCountMonitor(graphPanel.GraphStats, TorCtl.PostEventListener):
-  """
-  Tracks number of connections, counting client and directory connections as 
-  outbound.
-  """
-  
-  def __init__(self, conn):
-    graphPanel.GraphStats.__init__(self)
-    TorCtl.PostEventListener.__init__(self)
-    graphPanel.GraphStats.initialize(self, "green", "cyan", 10)
-    
-    self.orPort = "0"
-    self.dirPort = "0"
-    self.controlPort = "0"
-    self.resetOptions(conn)
-  
-  def resetOptions(self, conn):
-    try:
-      self.orPort = conn.get_option("ORPort")[0][1]
-      self.dirPort = conn.get_option("DirPort")[0][1]
-      self.controlPort = conn.get_option("ControlPort")[0][1]
-    except (socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed):
-      self.orPort = "0"
-      self.dirPort = "0"
-      self.controlPort = "0"
-  
-  def bandwidth_event(self, event):
-    # doesn't use events but this keeps it in sync with the bandwidth panel
-    # (and so it stops if Tor stops - used to use a separate thread but this
-    # is better)
-    inbound, outbound, control = 0, 0, 0
-    
-    for lIp, lPort, fIp, fPort in connections.getResolver("tor").getConnections():
-      if lPort in (self.orPort, self.dirPort): inbound += 1
-      elif lPort == self.controlPort: control += 1
-      else: outbound += 1
-    
-    self._processEvent(inbound, outbound)
-  
-  def getTitle(self, width):
-    return "Connection Count:"
-  
-  def getHeaderLabel(self, width, isPrimary):
-    avg = (self.primaryTotal if isPrimary else self.secondaryTotal) / max(1, self.tick)
-    if isPrimary: return "Inbound (%s, avg: %s):" % (self.lastPrimary, avg)
-    else: return "Outbound (%s, avg: %s):" % (self.lastSecondary, avg)
-

Modified: arm/release/interface/connPanel.py
===================================================================
--- arm/release/interface/connPanel.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/connPanel.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -109,9 +109,9 @@
   Lists tor related connection data.
   """
   
-  def __init__(self, stdscr, conn):
+  def __init__(self, stdscr, conn, isDisabled):
     TorCtl.PostEventListener.__init__(self)
-    panel.Panel.__init__(self, stdscr, 0)
+    panel.Panel.__init__(self, stdscr, "conn", 0)
     self.scroll = 0
     self.conn = conn                  # tor connection for querying country codes
     self.listingType = LIST_IP        # information used in listing entries
@@ -129,7 +129,7 @@
     self.orconnStatusCacheValid = False   # indicates if cache has been invalidated
     self.clientConnectionCache = None     # listing of nicknames for our client connections
     self.clientConnectionLock = RLock()   # lock for clientConnectionCache
-    self.isDisabled = False               # prevent panel from updating entirely
+    self.isDisabled = isDisabled          # prevent panel from updating entirely
     self.lastConnResults = None           # used to check if connection results have changed
     
     self.isCursorEnabled = True
@@ -216,6 +216,7 @@
     self.clientConnectionLock.release()
   
   # when consensus changes update fingerprint mappings
+  # TODO: should also be taking NS events into account
   def new_consensus_event(self, event):
     self.orconnStatusCacheValid = False
     self.fingerprintLookupCache.clear()
@@ -267,6 +268,8 @@
     Reloads connection results.
     """
     
+    if self.isDisabled: return
+    
     # inaccessible during startup so might need to be refetched
     try:
       if not self.address: self.address = self.conn.get_info("address")["address"]
@@ -450,7 +453,7 @@
       if not self.allowDNS: hostnames.setPaused(True)
       elif self.listingType == LIST_HOSTNAME: hostnames.setPaused(False)
     else: return # skip following redraw
-    self.redraw()
+    self.redraw(True)
   
   def draw(self, subwindow, width, height):
     self.connectionsLock.acquire()

Modified: arm/release/interface/controller.py
===================================================================
--- arm/release/interface/controller.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/controller.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -7,7 +7,6 @@
 """
 
 import re
-import os
 import math
 import time
 import curses
@@ -16,17 +15,17 @@
 from TorCtl import TorUtil
 
 import headerPanel
-import graphPanel
+import graphing.graphPanel
 import logPanel
 import connPanel
 import confPanel
 import descriptorPopup
 import fileDescriptorPopup
 
-from util import log, connections, hostnames, panel, uiTools
-import bandwidthMonitor
-import cpuMemMonitor
-import connCountMonitor
+from util import conf, log, connections, hostnames, panel, sysTools, torTools, uiTools
+import graphing.bandwidthStats
+import graphing.connStats
+import graphing.psStats
 
 CONFIRM_QUIT = True
 REFRESH_RATE = 5        # seconds between redrawing screen
@@ -43,14 +42,13 @@
   ["torrc"]]
 PAUSEABLE = ["header", "graph", "log", "conn"]
 
-# events needed for panels other than the event log
-REQ_EVENTS = ["BW", "NEWDESC", "NEWCONSENSUS", "CIRC"]
+CONFIG = {"features.graph.type": 1, "features.graph.bw.prepopulate": True, "log.configEntryUndefined": log.NOTICE}
 
 class ControlPanel(panel.Panel):
   """ Draws single line label for interface controls. """
   
   def __init__(self, stdscr, isBlindMode):
-    panel.Panel.__init__(self, stdscr, 0, 1)
+    panel.Panel.__init__(self, stdscr, "control", 0, 1)
     self.msgText = CTL_HELP           # message text to be displayed
     self.msgAttr = curses.A_NORMAL    # formatting attributes
     self.page = 1                     # page number currently being displayed
@@ -126,7 +124,7 @@
   """
   
   def __init__(self, stdscr, height):
-    panel.Panel.__init__(self, stdscr, 0, height)
+    panel.Panel.__init__(self, stdscr, "popup", 0, height)
   
   # The following methods are to emulate old panel functionality (this was the
   # only implementation to use these methods and will require a complete
@@ -256,55 +254,64 @@
   
   return selection
 
-def setEventListening(loggedEvents, conn, isBlindMode):
-  """
-  Tries to set events being listened for, displaying error for any event
-  types that aren't supported (possibly due to version issues). This returns 
-  a list of event types that were successfully set.
-  """
-  eventsSet = False
+def setEventListening(selectedEvents, isBlindMode):
+  # creates a local copy; note that a suspected python bug otherwise causes
+  # *very* puzzling results when trying to discard entries (silently
+  # returning out of this function!)
+  events = set(selectedEvents)
   
-  # adds events used for panels to function if not already included
-  connEvents = loggedEvents.union(set(REQ_EVENTS))
-  
   # removes special types only used in arm (UNKNOWN, TORCTL, ARM_DEBUG, etc)
   toDiscard = []
-  for event in connEvents:
-    if event not in logPanel.TOR_EVENT_TYPES.values(): toDiscard += [event]
+  for eventType in events:
+    if eventType not in logPanel.TOR_EVENT_TYPES.values(): toDiscard += [eventType]
   
-  for event in toDiscard: connEvents.discard(event)
+  for eventType in list(toDiscard):
+    events.discard(eventType)
   
-  while not eventsSet:
-    try:
-      conn.set_events(connEvents)
-      eventsSet = True
-    except TorCtl.ErrorReply, exc:
-      msg = str(exc)
-      if "Unrecognized event" in msg:
-        # figure out type of event we failed to listen for
-        start = msg.find("event \"") + 7
-        end = msg.rfind("\"")
-        eventType = msg[start:end]
-        
-        # removes and notes problem
-        connEvents.discard(eventType)
-        if eventType in loggedEvents: loggedEvents.remove(eventType)
-        
-        if eventType in REQ_EVENTS:
-          if eventType == "BW": msg = "(bandwidth graph won't function)"
-          elif eventType in ("NEWDESC", "NEWCONSENSUS") and not isBlindMode: msg = "(connections listing can't register consensus changes)"
-          else: msg = ""
-          log.log(log.ERR, "Unsupported event type: %s %s" % (eventType, msg))
-        else: log.log(log.WARN, "Unsupported event type: %s" % eventType)
-    except TorCtl.TorCtlClosed:
-      return []
+  # makes a mapping instead
+  events = dict([(eventType, None) for eventType in events])
   
-  loggedEvents = list(loggedEvents)
-  loggedEvents.sort() # alphabetizes
-  return loggedEvents
+  # adds mandatory events (those needed for arm functionality)
+  reqEvents = {"BW": "(bandwidth graph won't function)",
+               "NEWDESC": "(information related to descriptors will grow stale)",
+               "NS": "(information related to the consensus will grow stale)",
+               "NEWCONSENSUS": "(information related to the consensus will grow stale)"}
+  
+  if not isBlindMode:
+    reqEvents["CIRC"] = "(may cause issues in identifying client connections)"
+  
+  for eventType, msg in reqEvents.items():
+    events[eventType] = (log.ERR, "Unsupported event type: %s %s" % (eventType, msg))
+  
+  setEvents = torTools.getConn().setControllerEvents(events)
+  
+  # temporary hack for providing user selected events minus those that failed
+  # (wouldn't be a problem if I wasn't storing tor and non-tor events together...)
+  returnVal = list(selectedEvents.difference(torTools.FAILED_EVENTS))
+  returnVal.sort() # alphabetizes
+  return returnVal
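
A usage sketch for the event mapping this revision hands to the torTools wrapper. Only calls appearing in this commit are used; the meaning of the (runlevel, message) pair is an assumption based on how it's built above (presumably logged if tor refuses that event type):

  from util import log, torTools
  
  # optional event types map to None, required ones to what should be logged
  # if tor can't provide them
  events = {"DEBUG": None,
            "BW": (log.ERR, "Unsupported event type: BW (bandwidth graph won't function)")}
  torTools.getConn().setControllerEvents(events)
  # event types tor refused presumably show up in torTools.FAILED_EVENTS,
  # which is what the code above uses to strip them from the user's selection
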
 
-def drawTorMonitor(stdscr, conn, loggedEvents, isBlindMode):
+def connResetListener(conn, eventType):
   """
+  Pauses connection resolution when tor's shut down, and resumes if started
+  again.
+  """
+  
+  if connections.isResolverAlive("tor"):
+    resolver = connections.getResolver("tor")
+    resolver.setPaused(eventType == torTools.TOR_CLOSED)
+
+def selectiveRefresh(panels, page):
+  """
+  This forces a redraw of content on the currently active page (should be done
+  after changing pages, popups, or anything else that overwrites panels).
+  """
+  
+  for panelKey in PAGES[page]:
+    panels[panelKey].redraw(True)
+
+def drawTorMonitor(stdscr, loggedEvents, isBlindMode):
+  """
   Starts arm interface reflecting information on provided control port.
   
   stdscr - curses window
@@ -313,6 +320,17 @@
     otherwise unrecognized events)
   """
   
+  # loads config for various interface components
+  config = conf.getConfig("arm")
+  config.update(CONFIG)
+  config.update(graphing.graphPanel.CONFIG)
+  
+  # pauses/unpauses connection resolution according to if tor's connected or not
+  torTools.getConn().addStatusListener(connResetListener)
+  
+  # TODO: incrementally drop this requirement until everything's using the singleton
+  conn = torTools.getConn().getTorCtl()
+  
   curses.halfdelay(REFRESH_RATE * 10)   # uses getch call as timer for REFRESH_RATE seconds
   try: curses.use_default_colors()      # allows things like semi-transparent backgrounds (call can fail with ERR)
   except curses.error: pass
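
For reference, the conf pattern used at the top of this hunk (and by the revised panels) appears to be: a module declares hardcoded defaults, then config.update() overwrites them with whatever the armrc provides. A hypothetical example - the key name is made up:

  from util import conf
  
  MY_DEFAULTS = {"features.myPanel.refreshRate": 5}  # hypothetical armrc key
  
  def loadMyPanelSettings():
    config = conf.getConfig("arm")   # shared config backed by the armrc
    settings = dict(MY_DEFAULTS)
    config.update(settings)          # armrc entries replace the defaults
    return settings["features.myPanel.refreshRate"]
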
@@ -321,63 +339,32 @@
   try: curses.curs_set(0)
   except curses.error: pass
   
-  # gets pid of tor instance with control port open
-  torPid = None       # None if couldn't be resolved (provides error later)
+  # attempts to determine tor's current pid (left as None if unresolvable, logging an error later)
+  torPid = torTools.getConn().getMyPid()
   
-  pidOfCall = os.popen("pidof tor 2> /dev/null")
   try:
-    # gets pid if there's only one possability
-    results = pidOfCall.readlines()
-    if len(results) == 1 and len(results[0].split()) == 1: torPid = results[0].strip()
-  except IOError: pass # pid call failed
-  pidOfCall.close()
-  
-  if not torPid:
-    try:
-      # uses netstat to identify process with open control port (might not
-      # work if tor's being run as a different user due to permissions)
-      netstatCall = os.popen("netstat -npl 2> /dev/null | grep 127.0.0.1:%s 2> /dev/null" % conn.get_option("ControlPort")[0][1])
-      results = netstatCall.readlines()
-      
-      if len(results) == 1:
-        results = results[0].split()[6] # process field (ex. "7184/tor")
-        torPid = results[:results.find("/")]
-    except (IOError, socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed): pass # netstat or control port calls failed
-    netstatCall.close()
-  
-  if not torPid:
-    try:
-      # third try, use ps if there's only one possability
-      psCall = os.popen("ps -o pid -C tor 2> /dev/null")
-      results = psCall.readlines()
-      if len(results) == 2 and len(results[0].split()) == 1: torPid = results[1].strip()
-    except IOError: pass # ps call failed
-    psCall.close()
-  
-  try:
     confLocation = conn.get_info("config-file")["config-file"]
     if confLocation[0] != "/":
       # relative path - attempt to add process pwd
       try:
-        pwdxCall = os.popen("pwdx %s 2> /dev/null" % torPid)
-        results = pwdxCall.readlines()
+        results = sysTools.call("pwdx %s" % torPid)
         if len(results) == 1 and len(results[0].split()) == 2: confLocation = "%s/%s" % (results[0].split()[1], confLocation)
       except IOError: pass # pwdx call failed
-      pwdxCall.close()
   except (socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed):
     confLocation = ""
   
   # minor refinements for connection resolver
-  resolver = connections.getResolver("tor")
-  if torPid: resolver.processPid = torPid # helps narrow connection results
+  if not isBlindMode:
+    resolver = connections.getResolver("tor")
+    if torPid: resolver.processPid = torPid # helps narrow connection results
   
   # hack to display a better (arm specific) notice if all resolvers fail
   connections.RESOLVER_FINAL_FAILURE_MSG += " (connection related portions of the monitor won't function)"
   
   panels = {
-    "header": headerPanel.HeaderPanel(stdscr, conn, torPid),
+    "header": headerPanel.HeaderPanel(stdscr, config),
     "popup": Popup(stdscr, 9),
-    "graph": graphPanel.GraphPanel(stdscr),
+    "graph": graphing.graphPanel.GraphPanel(stdscr),
     "log": logPanel.LogMonitor(stdscr, conn, loggedEvents)}
   
   # TODO: later it would be good to set the right 'top' values during initialization, 
@@ -387,22 +374,25 @@
   # before being positioned - the following is a quick hack til rewritten
   panels["log"].setPaused(True)
   
-  panels["conn"] = connPanel.ConnPanel(stdscr, conn)
+  panels["conn"] = connPanel.ConnPanel(stdscr, conn, isBlindMode)
   panels["control"] = ControlPanel(stdscr, isBlindMode)
   panels["torrc"] = confPanel.ConfPanel(stdscr, confLocation, conn)
   
-  # prevents connection resolution via the connPanel if not being used
-  if isBlindMode: panels["conn"].isDisabled = True
-  
   # provides error if pid couldn't be determined (hopefully shouldn't happen...)
   if not torPid: log.log(log.WARN, "Unable to resolve tor pid, abandoning connection listing")
   
   # statistical monitors for graph
-  panels["graph"].addStats("bandwidth", bandwidthMonitor.BandwidthMonitor(conn))
-  panels["graph"].addStats("system resources", cpuMemMonitor.CpuMemMonitor(panels["header"]))
-  if not isBlindMode: panels["graph"].addStats("connections", connCountMonitor.ConnCountMonitor(conn))
-  panels["graph"].setStats("bandwidth")
+  panels["graph"].addStats("bandwidth", graphing.bandwidthStats.BandwidthStats(config))
+  panels["graph"].addStats("system resources", graphing.psStats.PsStats(config))
+  if not isBlindMode: panels["graph"].addStats("connections", graphing.connStats.ConnStats())
   
+  # sets graph based on config parameter
+  graphType = CONFIG["features.graph.type"]
+  if graphType == 0: panels["graph"].setStats(None)
+  elif graphType == 1: panels["graph"].setStats("bandwidth")
+  elif graphType == 2 and not isBlindMode: panels["graph"].setStats("connections")
+  elif graphType == 3: panels["graph"].setStats("system resources")
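
Rough sketch of what registering an extra stat would look like, assuming the new graphing.graphPanel.GraphStats keeps the hook methods of the old graphPanel module deleted later in this diff (getTitle, getHeaderLabel, _processEvent); only the addStats/setStats calls are taken from this commit:

  import graphing.graphPanel
  
  class SampleStats(graphing.graphPanel.GraphStats):
    # hypothetical stat that feeds a single value into the primary graph
    def getTitle(self, width):
      return "Samples:"
    
    def getHeaderLabel(self, width, isPrimary):
      return "samples" if isPrimary else ""
    
    def registerSample(self, value):
      self._processEvent(value, 0)
  
  # panels["graph"].addStats("samples", SampleStats())
  # panels["graph"].setStats("samples")
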
+  
   # listeners that update bandwidth and log panels with Tor status
   sighupTracker = sighupListener()
   conn.add_event_listener(panels["log"])
@@ -412,14 +402,22 @@
   conn.add_event_listener(panels["conn"])
   conn.add_event_listener(sighupTracker)
   
+  # prepopulates bandwidth values from state file
+  if CONFIG["features.graph.bw.prepopulate"]:
+    isSuccessful = panels["graph"].stats["bandwidth"].prepopulateFromState()
+    if isSuccessful: panels["graph"].updateInterval = 4
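
The prepopulation drawn on here parses tor's state file (see the bandwidthStats listing later in this commit): the BWHistoryReadValues/BWHistoryWriteValues entries are bytes transferred per 900 second interval, which the graph shows as KB/s averages. A minimal sketch of that conversion:

  def parseBwHistoryLine(line):
    # ex: "BWHistoryReadValues 915406,1647483" -> [0.99, 1.79] (KB/s averages)
    entries = line.split(" ", 1)[1].split(",")
    return [int(entry) / 1024.0 / 900 for entry in entries]
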
+  
   # tells Tor to listen to the events we're interested in
-  loggedEvents = setEventListening(loggedEvents, conn, isBlindMode)
+  loggedEvents = setEventListening(loggedEvents, isBlindMode)
   panels["log"].loggedEvents = loggedEvents # strips any that couldn't be set
   
   # directs logged TorCtl events to log panel
   TorUtil.loglevel = "DEBUG"
   TorUtil.logfile = panels["log"]
   
+  # tells revised panels to run as daemons
+  panels["header"].start()
+  
   # warns if tor isn't updating descriptors
   try:
     if conn.get_option("FetchUselessDescriptors")[0][1] == "0" and conn.get_option("DirPort")[0][1] == "0":
@@ -435,8 +433,14 @@
   overrideKey = None        # immediately runs with this input rather than waiting for the user if set
   page = 0
   regexFilters = []             # previously used log regex filters
-  panels["popup"].redraw()      # hack to make sure popup has a window instance (not entirely sure why...)
+  panels["popup"].redraw(True)  # hack to make sure popup has a window instance (not entirely sure why...)
   
+  # provides notice about any unused config keys
+  for key in config.getUnusedKeys():
+    log.log(CONFIG["log.configEntryUndefined"], "unrecognized configuration entry: %s" % key)
+  
+  # TODO: popups need to force the panels they cover to redraw (or better, have
+  # a global refresh function for after changing pages, popups, etc)
   while True:
     # tried only refreshing when the screen was resized but it caused a
     # noticeable lag when resizing and didn't have an appreciable effect
@@ -446,16 +450,16 @@
     try:
       # if sighup received then reload related information
       if sighupTracker.isReset:
-        panels["header"]._updateParams(True)
+        #panels["header"]._updateParams(True)
         
         # other panels that use torrc data
         panels["conn"].resetOptions()
-        if not isBlindMode: panels["graph"].stats["connections"].resetOptions(conn)
-        panels["graph"].stats["bandwidth"].resetOptions()
+        #if not isBlindMode: panels["graph"].stats["connections"].resetOptions(conn)
+        #panels["graph"].stats["bandwidth"].resetOptions()
         
         # if bandwidth graph is being shown then height might have changed
         if panels["graph"].currentDisplay == "bandwidth":
-          panels["graph"].height = panels["graph"].stats["bandwidth"].height
+          panels["graph"].setHeight(panels["graph"].stats["bandwidth"].getPreferredHeight())
         
         panels["torrc"].reset()
         sighupTracker.isReset = False
@@ -466,7 +470,7 @@
       # resilient in case of funky changes (such as resizing during popups)
       
       # hack to make sure header picks layout before using the dimensions below
-      panels["header"].getPreferredSize()
+      #panels["header"].getPreferredSize()
       
       startY = 0
       for panelKey in PAGE_S[:2]:
@@ -474,7 +478,7 @@
         panels[panelKey].setParent(stdscr)
         panels[panelKey].setWidth(-1)
         panels[panelKey].setTop(startY)
-        startY += panels[panelKey].height
+        startY += panels[panelKey].getHeight()
       
       panels["popup"].recreate(stdscr, 80, startY)
       
@@ -486,7 +490,7 @@
           panels[panelKey].setParent(stdscr)
           panels[panelKey].setWidth(-1)
           panels[panelKey].setTop(tmpStartY)
-          tmpStartY += panels[panelKey].height
+          tmpStartY += panels[panelKey].getHeight()
       
       # if it's been at least ten seconds since the last BW event Tor's probably done
       if not isUnresponsive and not panels["log"].controlPortClosed and panels["log"].getHeartbeat() >= 10:
@@ -505,7 +509,12 @@
       # I haven't the foggiest why, but doesn't work if redrawn out of order...
       for panelKey in (PAGE_S + PAGES[page]):
         # redrawing popup can result in display flicker when it should be hidden
-        if panelKey != "popup": panels[panelKey].redraw()
+        if panelKey != "popup":
+          if panelKey in ("header", "graph"):
+            # revised panel (handles its own content refreshing)
+            panels[panelKey].redraw()
+          else:
+            panels[panelKey].redraw(True)
       
       stdscr.refresh()
     finally:
@@ -529,7 +538,7 @@
           
           # provides prompt
           panels["control"].setMsg("Are you sure (q again to confirm)?", curses.A_BOLD)
-          panels["control"].redraw()
+          panels["control"].redraw(True)
           
           curses.cbreak()
           confirmationKey = stdscr.getch()
@@ -545,7 +554,19 @@
         # quits arm
         # very occasionally stderr gets "close failed: [Errno 11] Resource temporarily unavailable"
         # this appears to be a python bug: http://bugs.python.org/issue3014
-        hostnames.stop()
+        # (haven't seen this in quite some time... mysteriously resolved?)
+        
+        # joins on utility daemon threads - this might take a moment since
+        # the internal threadpools being joined might be sleeping
+        resolver = connections.getResolver("tor") if connections.isResolverAlive("tor") else None
+        if resolver: resolver.stop()  # sets halt flag (returning immediately)
+        hostnames.stop()              # halts and joins on hostname worker thread pool
+        if resolver: resolver.join()  # joins on halted resolver
+        
+        # stops panel daemons
+        panels["header"].stop()
+        panels["header"].join()
+        
         conn.close() # joins on TorCtl event thread
         break
     elif key == curses.KEY_LEFT or key == curses.KEY_RIGHT:
@@ -566,8 +587,9 @@
       
       # TODO: this redraw doesn't seem necessary (redraws anyway after this
       # loop) - look into this when refactoring
-      panels["control"].redraw()
+      panels["control"].redraw(True)
       
+      selectiveRefresh(panels, page)
     elif key == ord('p') or key == ord('P'):
       # toggles update freezing
       panel.CURSES_LOCK.acquire()
@@ -577,6 +599,8 @@
         panels["control"].setMsg(CTL_PAUSED if isPaused else CTL_HELP)
       finally:
         panel.CURSES_LOCK.release()
+      
+      selectiveRefresh(panels, page)
     elif key == ord('h') or key == ord('H'):
       # displays popup for current page's controls
       panel.CURSES_LOCK.acquire()
@@ -595,8 +619,8 @@
           graphedStats = panels["graph"].currentDisplay
           if not graphedStats: graphedStats = "none"
           popup.addfstr(1, 2, "<b>s</b>: graphed stats (<b>%s</b>)" % graphedStats)
-          popup.addfstr(1, 41, "<b>i</b>: graph update interval (<b>%s</b>)" % panels["graph"].updateInterval)
-          popup.addfstr(2, 2, "<b>b</b>: graph bounds (<b>%s</b>)" % graphPanel.BOUND_LABELS[panels["graph"].bounds])
+          popup.addfstr(1, 41, "<b>i</b>: graph update interval (<b>%s</b>)" % graphing.graphPanel.UPDATE_INTERVALS[panels["graph"].updateInterval][0])
+          popup.addfstr(2, 2, "<b>b</b>: graph bounds (<b>%s</b>)" % graphing.graphPanel.BOUND_LABELS[panels["graph"].bounds])
           popup.addfstr(2, 41, "<b>d</b>: file descriptors")
           popup.addfstr(3, 2, "<b>e</b>: change logged events")
           
@@ -654,6 +678,7 @@
         curses.halfdelay(REFRESH_RATE * 10)
         
         setPauseState(panels, isPaused, page)
+        selectiveRefresh(panels, page)
       finally:
         panel.CURSES_LOCK.release()
     elif page == 0 and (key == ord('s') or key == ord('S')):
@@ -675,7 +700,7 @@
       # hides top label of the graph panel and pauses panels
       if panels["graph"].currentDisplay:
         panels["graph"].showLabel = False
-        panels["graph"].redraw()
+        panels["graph"].redraw(True)
       setPauseState(panels, isPaused, page, True)
       
       selection = showMenu(stdscr, panels["popup"], "Graphed Stats:", options, initialSelection)
@@ -688,18 +713,22 @@
       if selection != -1 and selection != initialSelection:
         if selection == 0: panels["graph"].setStats(None)
         else: panels["graph"].setStats(options[selection].lower())
+      
+      selectiveRefresh(panels, page)
     elif page == 0 and (key == ord('i') or key == ord('I')):
       # provides menu to pick graph panel update interval
-      options = [label for (label, intervalTime) in graphPanel.UPDATE_INTERVALS]
+      options = [label for (label, intervalTime) in graphing.graphPanel.UPDATE_INTERVALS]
       
-      initialSelection = -1
-      for i in range(len(options)):
-        if options[i] == panels["graph"].updateInterval: initialSelection = i
+      initialSelection = panels["graph"].updateInterval
       
+      #initialSelection = -1
+      #for i in range(len(options)):
+      #  if options[i] == panels["graph"].updateInterval: initialSelection = i
+      
       # hides top label of the graph panel and pauses panels
       if panels["graph"].currentDisplay:
         panels["graph"].showLabel = False
-        panels["graph"].redraw()
+        panels["graph"].redraw(True)
       setPauseState(panels, isPaused, page, True)
       
       selection = showMenu(stdscr, panels["popup"], "Update Interval:", options, initialSelection)
@@ -709,10 +738,14 @@
       setPauseState(panels, isPaused, page)
       
       # applies new setting
-      if selection != -1: panels["graph"].updateInterval = options[selection]
+      if selection != -1: panels["graph"].updateInterval = selection
+      
+      selectiveRefresh(panels, page)
     elif page == 0 and (key == ord('b') or key == ord('B')):
       # uses the next boundary type for graph
-      panels["graph"].bounds = (panels["graph"].bounds + 1) % 2
+      panels["graph"].bounds = (panels["graph"].bounds + 1) % 3
+      
+      selectiveRefresh(panels, page)
     elif page == 0 and key in (ord('d'), ord('D')):
       # provides popup with file descriptors
       panel.CURSES_LOCK.acquire()
@@ -734,7 +767,7 @@
         
         # provides prompt
         panels["control"].setMsg("Events to log: ")
-        panels["control"].redraw()
+        panels["control"].redraw(True)
         
         # makes cursor and typing visible
         try: curses.curs_set(1)
@@ -770,11 +803,11 @@
         if eventsInput != "":
           try:
             expandedEvents = logPanel.expandEvents(eventsInput)
-            loggedEvents = setEventListening(expandedEvents, conn, isBlindMode)
+            loggedEvents = setEventListening(expandedEvents, isBlindMode)
             panels["log"].loggedEvents = loggedEvents
           except ValueError, exc:
             panels["control"].setMsg("Invalid flags: %s" % str(exc), curses.A_STANDOUT)
-            panels["control"].redraw()
+            panels["control"].redraw(True)
             time.sleep(2)
         
         # reverts popup dimensions
@@ -794,7 +827,7 @@
       # hides top label of the graph panel and pauses panels
       if panels["graph"].currentDisplay:
         panels["graph"].showLabel = False
-        panels["graph"].redraw()
+        panels["graph"].redraw(True)
       setPauseState(panels, isPaused, page, True)
       
       selection = showMenu(stdscr, panels["popup"], "Log Filter:", options, initialSelection)
@@ -808,7 +841,7 @@
         try:
           # provides prompt
           panels["control"].setMsg("Regular expression: ")
-          panels["control"].redraw()
+          panels["control"].redraw(True)
           
           # makes cursor and typing visible
           try: curses.curs_set(1)
@@ -831,7 +864,7 @@
               regexFilters = [regexInput] + regexFilters
             except re.error, exc:
               panels["control"].setMsg("Unable to compile expression: %s" % str(exc), curses.A_STANDOUT)
-              panels["control"].redraw()
+              panels["control"].redraw(True)
               time.sleep(2)
           panels["control"].setMsg(CTL_PAUSED if isPaused else CTL_HELP)
         finally:
@@ -869,7 +902,7 @@
         # reconfigures connection panel to accommodate details dialog
         panels["conn"].showLabel = False
         panels["conn"].showingDetails = True
-        panels["conn"].redraw()
+        panels["conn"].redraw(True)
         
         hostnames.setPaused(not panels["conn"].allowDNS)
         relayLookupCache = {} # temporary cache of entry -> (ns data, desc data)
@@ -947,10 +980,15 @@
               lookupErrored = False
               if selection in relayLookupCache.keys(): nsEntry, descEntry = relayLookupCache[selection]
               else:
-                # ns lookup fails, can happen with localhost lookups if relay's having problems (orport not reachable)
-                # and this will be empty if network consensus couldn't be fetched
-                try: nsCall = conn.get_network_status("id/%s" % fingerprint)
-                except (socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed): lookupErrored = True
+                try:
+                  nsCall = conn.get_network_status("id/%s" % fingerprint)
+                  if len(nsCall) == 0: raise TorCtl.ErrorReply() # no results provided
+                except (socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed):
+                  # ns lookup fails or provides empty results - can happen with
+                  # localhost lookups if relay's having problems (orport not
+                  # reachable) and this will be empty if network consensus
+                  # couldn't be fetched
+                  lookupErrored = True
                 
                 if not lookupErrored and nsCall:
                   if len(nsCall) > 1:
@@ -999,7 +1037,7 @@
             panels["conn"].handleKey(key)
           elif key in (ord('d'), ord('D')):
             descriptorPopup.showDescriptorPopup(panels["popup"], stdscr, conn, panels["conn"])
-            panels["conn"].redraw()
+            panels["conn"].redraw(True)
         
         panels["conn"].showLabel = True
         panels["conn"].showingDetails = False
@@ -1015,7 +1053,7 @@
         setPauseState(panels, isPaused, page, True)
         curses.cbreak() # wait indefinitely for key presses (no timeout)
         panels["conn"].showLabel = False
-        panels["conn"].redraw()
+        panels["conn"].redraw(True)
         
         descriptorPopup.showDescriptorPopup(panels["popup"], stdscr, conn, panels["conn"])
         
@@ -1032,7 +1070,7 @@
       
       # hides top label of conn panel and pauses panels
       panels["conn"].showLabel = False
-      panels["conn"].redraw()
+      panels["conn"].redraw(True)
       setPauseState(panels, isPaused, page, True)
       
       selection = showMenu(stdscr, panels["popup"], "List By:", options, initialSelection)
@@ -1068,7 +1106,7 @@
       
       # hides top label of conn panel and pauses panels
       panels["conn"].showLabel = False
-      panels["conn"].redraw()
+      panels["conn"].redraw(True)
       setPauseState(panels, isPaused, page, True)
       
       selection = showMenu(stdscr, panels["popup"], "Resolver Util:", options, initialSelection)
@@ -1201,10 +1239,10 @@
       # reloads torrc, providing a notice if successful or not
       isSuccessful = panels["torrc"].reset(False)
       resetMsg = "torrc reloaded" if isSuccessful else "failed to reload torrc"
-      if isSuccessful: panels["torrc"].redraw()
+      if isSuccessful: panels["torrc"].redraw(True)
       
       panels["control"].setMsg(resetMsg, curses.A_STANDOUT)
-      panels["control"].redraw()
+      panels["control"].redraw(True)
       time.sleep(1)
       
       panels["control"].setMsg(CTL_PAUSED if isPaused else CTL_HELP)
@@ -1216,63 +1254,20 @@
         
         # provides prompt
         panels["control"].setMsg("This will reset Tor's internal state. Are you sure (x again to confirm)?", curses.A_BOLD)
-        panels["control"].redraw()
+        panels["control"].redraw(True)
         
         curses.cbreak()
         confirmationKey = stdscr.getch()
         if confirmationKey in (ord('x'), ord('X')):
           try:
-            conn.send_signal("RELOAD")
-          except Exception, err:
-            # new torrc parameters caused an error (tor's likely shut down)
-            # BUG: this doesn't work - torrc errors still cause TorCtl to crash... :(
-            # http://bugs.noreply.org/flyspray/index.php?do=details&id=1329
-            log.log(log.ERR, "Error detected when reloading tor: %s" % str(err))
-            pass
-          
-          # The following issues a sighup via a system command (Sebastian
-          # mentioned later that sending a RELOAD signal is equivilant, which
-          # is of course far preferable).
-          
-          """
-          try:
-            # Redirects stderr to stdout so we can check error status (output
-            # should be empty if successful). Example error:
-            # pkill: 5592 - Operation not permitted
-            #
-            # note that this may provide multiple errors, even if successful,
-            # hence this:
-            #   - only provide an error if Tor fails to log a sighup
-            #   - provide the error message associated with the tor pid (others
-            #     would be a red herring)
-            pkillCall = os.popen("pkill -sighup tor 2> /dev/stdout")
-            pkillOutput = pkillCall.readlines()
-            pkillCall.close()
+            torTools.getConn().reload()
+          except IOError, exc:
+            log.log(log.ERR, "Error detected when reloading tor: %s" % str(exc))
             
-            # Give the sighupTracker a moment to detect the sighup signal. This
-            # is, of course, a possible concurrency bug. However I'm not sure
-            # of a better method for blocking on this...
-            waitStart = time.time()
-            while time.time() - waitStart < 1:
-              time.sleep(0.1)
-              if sighupTracker.isReset: break
-            
-            if not sighupTracker.isReset:
-              errorLine = ""
-              if torPid:
-                for line in pkillOutput:
-                  if line.startswith("pkill: %s - " % torPid):
-                    errorLine = line
-                    break
-              
-              if errorLine: raise IOError(" ".join(errorLine.split()[3:]))
-              else: raise IOError()
-          except IOError, err:
-            errorMsg = " (%s)" % str(err) if str(err) else ""
-            panels["control"].setMsg("Sighup failed%s" % errorMsg, curses.A_STANDOUT)
-            panels["control"].redraw()
-            time.sleep(2)
-          """
+            #errorMsg = " (%s)" % str(err) if str(err) else ""
+            #panels["control"].setMsg("Sighup failed%s" % errorMsg, curses.A_STANDOUT)
+            #panels["control"].redraw(True)
+            #time.sleep(2)
         
         # reverts display settings
         curses.halfdelay(REFRESH_RATE * 10)
@@ -1287,9 +1282,9 @@
     elif page == 2:
       panels["torrc"].handleKey(key)
 
-def startTorMonitor(conn, loggedEvents, isBlindMode):
+def startTorMonitor(loggedEvents, isBlindMode):
   try:
-    curses.wrapper(drawTorMonitor, conn, loggedEvents, isBlindMode)
+    curses.wrapper(drawTorMonitor, loggedEvents, isBlindMode)
   except KeyboardInterrupt:
     pass # skip printing stack trace in case of keyboard interrupt
 

Deleted: arm/release/interface/cpuMemMonitor.py
===================================================================
--- arm/release/interface/cpuMemMonitor.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/cpuMemMonitor.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,54 +0,0 @@
-#!/usr/bin/env python
-# cpuMemMonitor.py -- Tracks cpu and memory usage of Tor.
-# Released under the GPL v3 (http://www.gnu.org/licenses/gpl.html)
-
-import os
-import time
-from TorCtl import TorCtl
-
-from util import uiTools
-import graphPanel
-
-class CpuMemMonitor(graphPanel.GraphStats, TorCtl.PostEventListener):
-  """
-  Tracks system resource usage (cpu and memory usage), using cached values in
-  headerPanel if recent enough (otherwise retrieved independently).
-  """
-  
-  def __init__(self, headerPanel):
-    graphPanel.GraphStats.__init__(self)
-    TorCtl.PostEventListener.__init__(self)
-    graphPanel.GraphStats.initialize(self, "green", "cyan", 10)
-    self.headerPanel = headerPanel  # header panel, used to limit ps calls
-  
-  def bandwidth_event(self, event):
-    # doesn't use events but this keeps it in sync with the bandwidth panel
-    # (and so it stops if Tor stops
-    if self.headerPanel.lastUpdate + 1 >= time.time():
-      # reuses ps results if recent enough
-      self._processEvent(float(self.headerPanel.vals["%cpu"]), float(self.headerPanel.vals["rss"]) / 1024.0)
-    else:
-      # cached results stale - requery ps
-      psCall = os.popen('ps -p %s -o %s 2> /dev/null' % (self.headerPanel.vals["pid"], "%cpu,rss"))
-      try:
-        sampling = psCall.read().strip().split()[2:]
-        psCall.close()
-        
-        if len(sampling) < 2:
-          # either ps failed or returned no tor instance, register error
-          raise IOError()
-        else:
-          self._processEvent(float(sampling[0]), float(sampling[1]) / 1024.0)
-      except IOError:
-        # ps call failed - we need to register something (otherwise timescale
-        # would be thrown off) so keep old results
-        self._processEvent(self.lastPrimary, self.lastSecondary)
-  
-  def getTitle(self, width):
-    return "System Resources:"
-  
-  def getHeaderLabel(self, width, isPrimary):
-    avg = (self.primaryTotal if isPrimary else self.secondaryTotal) / max(1, self.tick)
-    if isPrimary: return "CPU (%s%%, avg: %0.1f%%):" % (self.lastPrimary, avg)
-    else: return "Memory (%s, avg: %s):" % (uiTools.getSizeLabel(self.lastSecondary * 1048576, 1), uiTools.getSizeLabel(avg * 1048576, 1))
-

Modified: arm/release/interface/fileDescriptorPopup.py
===================================================================
--- arm/release/interface/fileDescriptorPopup.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/fileDescriptorPopup.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -5,7 +5,7 @@
 import os
 import curses
 
-from util import panel, uiTools
+from util import panel, sysTools, uiTools
 
 class PopupProperties:
   """
@@ -27,13 +27,10 @@
       
       # retrieves list of open files, options are:
       # n = no dns lookups, p = by pid, -F = show fields (L = login name, n = opened files)
-      lsofCall = os.popen3("lsof -np %s -F Ln 2> /dev/null" % torPid)
-      results = lsofCall[1].readlines()
-      errResults = lsofCall[2].readlines()
+      # TODO: better rewrite to take advantage of sysTools
       
-      # checks if lsof was unavailable
-      if "not found" in "".join(errResults):
-        raise Exception("error: lsof is unavailable")
+      if not sysTools.isAvailable("lsof"): raise Exception("error: lsof is unavailable")
+      results = sysTools.call("lsof -np %s -F Ln" % torPid)
       
       # if we didn't get any results then tor's probably closed (keep defaults)
       if len(results) == 0: return
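
Usage sketch for the new sysTools util based on how it's called in this commit: isAvailable() checks that a command is in the path, and call() returns its stdout as a list of lines (the extra "3600, True" arguments used in bandwidthStats.py look like a cache age and a flag, but that reading is an assumption):

  from util import sysTools
  
  def getTorOpenFileCount(torPid):
    # counts the 'n' (file name) fields from lsof's parsable output
    if not sysTools.isAvailable("lsof"): return None
    results = sysTools.call("lsof -np %s -F Ln" % torPid)
    return len([line for line in results if line.startswith("n")])
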
@@ -77,13 +74,17 @@
         results = ulimitCall.readlines()
         if len(results) == 0: raise Exception("error: ulimit is unavailable")
         self.fdLimit = int(results[0])
+        
+        # can't use sysTools for this call because ulimit isn't in the path...
+        # so how the **** am I to detect if it's available!
+        #if not sysTools.isAvailable("ulimit"): raise Exception("error: ulimit is unavailable")
+        #results = sysTools.call("ulimit -Hn")
+        #if len(results) == 0: raise Exception("error: ulimit call failed")
+        #self.fdLimit = int(results[0])
     except Exception, exc:
       # problem arose in calling or parsing lsof or ulimit calls
       self.errorMsg = str(exc)
     finally:
-      lsofCall[0].close()
-      lsofCall[1].close()
-      lsofCall[2].close()
       if ulimitCall: ulimitCall.close()
   
   def handleKey(self, key, height):

Deleted: arm/release/interface/graphPanel.py
===================================================================
--- arm/release/interface/graphPanel.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/graphPanel.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,279 +0,0 @@
-#!/usr/bin/env python
-# graphPanel.py -- Graph providing a variety of statistics.
-# Released under the GPL v3 (http://www.gnu.org/licenses/gpl.html)
-
-import copy
-import curses
-
-from util import panel, uiTools
-
-MAX_GRAPH_COL = 150  # max columns of data in graph
-WIDE_LABELING_GRAPH_COL = 50  # minimum graph columns to use wide spacing for x-axis labels
-
-# enums for graph bounds:
-#   BOUNDS_MAX - global maximum (highest value ever seen)
-#   BOUNDS_TIGHT - local maximum (highest value currently on the graph)
-BOUNDS_MAX, BOUNDS_TIGHT = range(2)
-BOUND_LABELS = {BOUNDS_MAX: "max", BOUNDS_TIGHT: "tight"}
-
-# time intervals at which graphs can be updated
-DEFAULT_UPDATE_INTERVAL = "5 seconds"
-UPDATE_INTERVALS = [("each second", 1),     ("5 seconds", 5),   ("30 seconds", 30),   ("minutely", 60),
-                    ("half hour", 1800),    ("hourly", 3600),   ("daily", 86400)]
-
-class GraphStats:
-  """
-  Module that's expected to update dynamically and provide attributes to be
-  graphed. Up to two graphs (a 'primary' and 'secondary') can be displayed at a
-  time and timescale parameters use the labels defined in UPDATE_INTERVALS.
-  """
-  
-  def __init__(self):
-    """
-    Initializes all parameters to dummy values.
-    """
-    
-    self.primaryColor = None    # colors used to draw stats/graphs
-    self.secondaryColor = None
-    self.height = None          # vertical size of content
-    self.graphPanel = None      # panel where stats are drawn (set when added to GraphPanel)
-    
-    self.isPaused = False
-    self.pauseBuffer = None     # mirror instance used to track updates when pauses - 
-                                # this is a pauseBuffer instance itself if None
-    
-    # tracked stats
-    self.tick = 0               # number of events processed
-    self.lastPrimary = 0        # most recent registered stats
-    self.lastSecondary = 0
-    self.primaryTotal = 0       # sum of all stats seen
-    self.secondaryTotal = 0
-    
-    # timescale dependent stats
-    self.maxPrimary, self.maxSecondary = {}, {}
-    self.primaryCounts, self.secondaryCounts = {}, {}
-    for (label, timescale) in UPDATE_INTERVALS:
-      # recent rates for graph
-      self.maxPrimary[label] = 1
-      self.maxSecondary[label] = 1
-      
-      # historic stats for graph, first is accumulator
-      # iterative insert needed to avoid making shallow copies (nasty, nasty gotcha)
-      self.primaryCounts[label] = (MAX_GRAPH_COL + 1) * [0]
-      self.secondaryCounts[label] = (MAX_GRAPH_COL + 1) * [0]
-  
-  def initialize(self, primaryColor, secondaryColor, height, pauseBuffer=None):
-    """
-    Initializes newly constructed GraphPanel instance.
-    """
-    
-    # used because of python's inability to have overloaded constructors
-    self.primaryColor = primaryColor        # colors used to draw stats/graphs
-    self.secondaryColor = secondaryColor
-    self.height = height
-    
-    # mirror instance used to track updates when paused
-    if not pauseBuffer: self.pauseBuffer = GraphStats()
-    else: self.pauseBuffer = pauseBuffer
-  
-  def getTitle(self, width):
-    """
-    Provides top label.
-    """
-    
-    return ""
-  
-  def getHeaderLabel(self, width, isPrimary):
-    """
-    Provides labeling presented at the top of the graph.
-    """
-    
-    return ""
-  
-  def draw(self, panel):
-    """
-    Allows for any custom drawing monitor wishes to append.
-    """
-    
-    pass
-  
-  def setPaused(self, isPause):
-    """
-    If true, prevents bandwidth updates from being presented. This is a no-op
-    if a pause buffer.
-    """
-    
-    if isPause == self.isPaused or not self.pauseBuffer: return
-    self.isPaused = isPause
-    
-    if self.isPaused: active, inactive = self.pauseBuffer, self
-    else: active, inactive = self, self.pauseBuffer
-    self._parameterSwap(active, inactive)
-  
-  def _parameterSwap(self, active, inactive):
-    """
-    Either overwrites parameters of pauseBuffer or with the current values or
-    vice versa. This is a helper method for setPaused and should be overwritten
-    to append with additional parameters that need to be preserved when paused.
-    """
-    
-    active.tick = inactive.tick
-    active.lastPrimary = inactive.lastPrimary
-    active.lastSecondary = inactive.lastSecondary
-    active.primaryTotal = inactive.primaryTotal
-    active.secondaryTotal = inactive.secondaryTotal
-    active.maxPrimary = dict(inactive.maxPrimary)
-    active.maxSecondary = dict(inactive.maxSecondary)
-    active.primaryCounts = copy.deepcopy(inactive.primaryCounts)
-    active.secondaryCounts = copy.deepcopy(inactive.secondaryCounts)
-  
-  def _processEvent(self, primary, secondary):
-    """
-    Includes new stats in graphs and notifies GraphPanel of changes.
-    """
-    
-    if self.isPaused: self.pauseBuffer._processEvent(primary, secondary)
-    else:
-      self.lastPrimary, self.lastSecondary = primary, secondary
-      self.primaryTotal += primary
-      self.secondaryTotal += secondary
-      
-      # updates for all time intervals
-      self.tick += 1
-      for (label, timescale) in UPDATE_INTERVALS:
-        self.primaryCounts[label][0] += primary
-        self.secondaryCounts[label][0] += secondary
-        
-        if self.tick % timescale == 0:
-          self.maxPrimary[label] = max(self.maxPrimary[label], self.primaryCounts[label][0] / timescale)
-          self.primaryCounts[label][0] /= timescale
-          self.primaryCounts[label].insert(0, 0)
-          del self.primaryCounts[label][MAX_GRAPH_COL + 1:]
-          
-          self.maxSecondary[label] = max(self.maxSecondary[label], self.secondaryCounts[label][0] / timescale)
-          self.secondaryCounts[label][0] /= timescale
-          self.secondaryCounts[label].insert(0, 0)
-          del self.secondaryCounts[label][MAX_GRAPH_COL + 1:]
-      
-      if self.graphPanel: self.graphPanel.redraw()
-
-class GraphPanel(panel.Panel):
-  """
-  Panel displaying a graph, drawing statistics from custom GraphStats
-  implementations.
-  """
-  
-  def __init__(self, stdscr):
-    panel.Panel.__init__(self, stdscr, 0, 0) # height is overwritten with current module
-    self.updateInterval = DEFAULT_UPDATE_INTERVAL
-    self.isPaused = False
-    self.showLabel = True         # shows top label if true, hides otherwise
-    self.bounds = BOUNDS_TIGHT    # determines bounds on graph
-    self.currentDisplay = None    # label of the stats currently being displayed
-    self.stats = {}               # available stats (mappings of label -> instance)
-  
-  def draw(self, subwindow, width, height):
-    """ Redraws graph panel """
-    
-    graphCol = min((width - 10) / 2, MAX_GRAPH_COL)
-    
-    if self.currentDisplay:
-      param = self.stats[self.currentDisplay]
-      primaryColor = uiTools.getColor(param.primaryColor)
-      secondaryColor = uiTools.getColor(param.secondaryColor)
-      
-      if self.showLabel: self.addstr(0, 0, param.getTitle(width), curses.A_STANDOUT)
-      
-      # top labels
-      left, right = param.getHeaderLabel(width / 2, True), param.getHeaderLabel(width / 2, False)
-      if left: self.addstr(1, 0, left, curses.A_BOLD | primaryColor)
-      if right: self.addstr(1, graphCol + 5, right, curses.A_BOLD | secondaryColor)
-      
-      # determines max value on the graph
-      primaryBound, secondaryBound = -1, -1
-      
-      if self.bounds == BOUNDS_MAX:
-        primaryBound = param.maxPrimary[self.updateInterval]
-        secondaryBound = param.maxSecondary[self.updateInterval]
-      elif self.bounds == BOUNDS_TIGHT:
-        for value in param.primaryCounts[self.updateInterval][1:graphCol + 1]: primaryBound = max(value, primaryBound)
-        for value in param.secondaryCounts[self.updateInterval][1:graphCol + 1]: secondaryBound = max(value, secondaryBound)
-      
-      # displays bound
-      self.addstr(2, 0, "%4s" % str(int(primaryBound)), primaryColor)
-      self.addstr(7, 0, "   0", primaryColor)
-      
-      self.addstr(2, graphCol + 5, "%4s" % str(int(secondaryBound)), secondaryColor)
-      self.addstr(7, graphCol + 5, "   0", secondaryColor)
-      
-      # creates bar graph of bandwidth usage over time
-      for col in range(graphCol):
-        colHeight = min(5, 5 * param.primaryCounts[self.updateInterval][col + 1] / max(1, primaryBound))
-        for row in range(colHeight): self.addstr(7 - row, col + 5, " ", curses.A_STANDOUT | primaryColor)
-      
-      for col in range(graphCol):
-        colHeight = min(5, 5 * param.secondaryCounts[self.updateInterval][col + 1] / max(1, secondaryBound))
-        for row in range(colHeight): self.addstr(7 - row, col + graphCol + 10, " ", curses.A_STANDOUT | secondaryColor)
-      
-      # bottom labeling of x-axis
-      intervalSec = 1
-      for (label, timescale) in UPDATE_INTERVALS:
-        if label == self.updateInterval: intervalSec = timescale
-      
-      intervalSpacing = 10 if graphCol >= WIDE_LABELING_GRAPH_COL else 5
-      unitsLabel, decimalPrecision = None, 0
-      for i in range(1, (graphCol + intervalSpacing - 4) / intervalSpacing):
-        loc = i * intervalSpacing
-        timeLabel = uiTools.getTimeLabel(loc * intervalSec, decimalPrecision)
-        
-        if not unitsLabel: unitsLabel = timeLabel[-1]
-        elif unitsLabel != timeLabel[-1]:
-          # upped scale so also up precision of future measurements
-          unitsLabel = timeLabel[-1]
-          decimalPrecision += 1
-        else:
-          # if constrained on space then strips labeling since already provided
-          timeLabel = timeLabel[:-1]
-        
-        self.addstr(8, 4 + loc, timeLabel, primaryColor)
-        self.addstr(8, graphCol + 10 + loc, timeLabel, secondaryColor)
-        
-      # allows for finishing touches by monitor
-      param.draw(self)
-  
-  def addStats(self, label, stats):
-    """
-    Makes GraphStats instance available in the panel.
-    """
-    
-    stats.graphPanel = self
-    self.stats[label] = stats
-    stats.isPaused = True
-  
-  def setStats(self, label):
-    """
-    Sets the current stats instance, hiding panel if None.
-    """
-    
-    if label != self.currentDisplay:
-      if self.currentDisplay: self.stats[self.currentDisplay].setPaused(True)
-      
-      if not label:
-        self.currentDisplay = None
-        self.height = 0
-      elif label in self.stats.keys():
-        self.currentDisplay = label
-        newStats = self.stats[label]
-        self.height = newStats.height
-        newStats.setPaused(self.isPaused)
-      else: raise ValueError("Unrecognized stats label: %s" % label)
-  
-  def setPaused(self, isPause):
-    """
-    If true, prevents bandwidth updates from being presented.
-    """
-    
-    if isPause == self.isPaused: return
-    self.isPaused = isPause
-    if self.currentDisplay: self.stats[self.currentDisplay].setPaused(self.isPaused)
-
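
The pauseBuffer scheme above (a mirror GraphStats instance keeps accumulating while the display is frozen, and the parameters are swapped back on unpause) boils down to the following pattern; this is only a generic illustration, not the new module's implementation:

  class PausableCounter:
    def __init__(self, isBuffer=False):
      self.total = 0
      self.isPaused = False
      # mirror instance that keeps counting while this one appears frozen
      self.pauseBuffer = None if isBuffer else PausableCounter(isBuffer=True)
    
    def add(self, value):
      if self.isPaused: self.pauseBuffer.add(value)
      else: self.total += value
    
    def setPaused(self, isPause):
      if isPause == self.isPaused or not self.pauseBuffer: return
      self.isPaused = isPause
      if isPause: self.pauseBuffer.total = self.total  # buffer picks up from here
      else: self.total = self.pauseBuffer.total        # catch up on what was missed
  
  # c = PausableCounter(); c.add(5); c.setPaused(True); c.add(3)  # still shows 5
  # c.setPaused(False)                                            # c.total is now 8
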

Deleted: arm/release/interface/graphing/__init__.py
===================================================================
--- arm/trunk/interface/graphing/__init__.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/graphing/__init__.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,6 +0,0 @@
-"""
-Panels, popups, and handlers comprising the arm user interface.
-"""
-
-__all__ = ["graphPanel.py", "bandwidthStats", "connStats", "psStats"]
-

Copied: arm/release/interface/graphing/__init__.py (from rev 22616, arm/trunk/interface/graphing/__init__.py)
===================================================================
--- arm/release/interface/graphing/__init__.py	                        (rev 0)
+++ arm/release/interface/graphing/__init__.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -0,0 +1,6 @@
+"""
+Panels and stats comprising the graphing portion of the arm user interface.
+"""
+
+__all__ = ["graphPanel", "bandwidthStats", "connStats", "psStats"]
+

Deleted: arm/release/interface/graphing/bandwidthStats.py
===================================================================
--- arm/trunk/interface/graphing/bandwidthStats.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/graphing/bandwidthStats.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,348 +0,0 @@
-"""
-Tracks bandwidth usage of the tor process, expanding to include accounting
-stats if they're set.
-"""
-
-import time
-
-import graphPanel
-from util import log, sysTools, torTools, uiTools
-
-DL_COLOR, UL_COLOR = "green", "cyan"
-
-# width at which panel abandons placing optional stats (avg and total) with
-# header in favor of replacing the x-axis label
-COLLAPSE_WIDTH = 135
-
-# valid keys for the accountingInfo mapping
-ACCOUNTING_ARGS = ("status", "resetTime", "read", "written", "readLimit", "writtenLimit")
-
-PREPOPULATE_SUCCESS_MSG = "Read the last day of bandwidth history from the state file"
-PREPOPULATE_FAILURE_MSG = "Unable to prepopulate bandwidth information (%s)"
-
-DEFAULT_CONFIG = {"features.graph.bw.accounting.show": True, "features.graph.bw.accounting.rate": 10, "features.graph.bw.accounting.isTimeLong": False, "log.graph.bw.prepopulateSuccess": log.NOTICE, "log.graph.bw.prepopulateFailure": log.NOTICE}
-
-class BandwidthStats(graphPanel.GraphStats):
-  """
-  Uses tor BW events to generate bandwidth usage graph.
-  """
-  
-  def __init__(self, config=None):
-    graphPanel.GraphStats.__init__(self)
-    
-    self._config = dict(DEFAULT_CONFIG)
-    if config:
-      config.update(self._config)
-      self._config["features.graph.bw.accounting.rate"] = max(1, self._config["features.graph.bw.accounting.rate"])
-    
-    # accounting data (set by _updateAccountingInfo method)
-    self.accountingLastUpdated = 0
-    self.accountingInfo = dict([(arg, "") for arg in ACCOUNTING_ARGS])
-    
-    # listens for tor reload (sighup) events which can reset the bandwidth
-    # rate/burst and if tor's using accounting
-    conn = torTools.getConn()
-    self._titleStats, self.isAccounting = [], False
-    self.resetListener(conn, torTools.TOR_INIT) # initializes values
-    conn.addStatusListener(self.resetListener)
-  
-  def resetListener(self, conn, eventType):
-    # updates title parameters and accounting status if they changed
-    self.new_desc_event(None) # updates title params
-    
-    if self._config["features.graph.bw.accounting.show"]:
-      self.isAccounting = conn.getInfo('accounting/enabled') == '1'
-  
-  def prepopulateFromState(self):
-    """
-    Attempts to use tor's state file to prepopulate values for the 15 minute
-    interval via the BWHistoryReadValues/BWHistoryWriteValues values. This
-    returns True if successful and False otherwise.
-    """
-    
-    # gets the uptime (using the same parameters as the header panel to take
-    # advantage of caching
-    conn, uptime = torTools.getConn(), None
-    queryPid = conn.getMyPid()
-    if queryPid:
-      queryParam = ["%cpu", "rss", "%mem", "etime"]
-      queryCmd = "ps -p %s -o %s" % (queryPid, ",".join(queryParam))
-      psCall = sysTools.call(queryCmd, 3600, True)
-      
-      if psCall and len(psCall) == 2:
-        stats = psCall[1].strip().split()
-        if len(stats) == 4: uptime = stats[3]
-    
-    # checks if tor has been running for at least a day, the reason being that
-    # the state tracks a day's worth of data and this should only prepopulate
-    # results associated with this tor instance
-    if not uptime or not "-" in uptime:
-      msg = PREPOPULATE_FAILURE_MSG % "insufficient uptime"
-      log.log(self._config["log.graph.bw.prepopulateFailure"], msg)
-      return False
-    
-    # get the user's data directory (usually '~/.tor')
-    dataDir = conn.getOption("DataDirectory")
-    if not dataDir:
-      msg = PREPOPULATE_FAILURE_MSG % "data directory not found"
-      log.log(self._config["log.graph.bw.prepopulateFailure"], msg)
-      return False
-    
-    # attempt to open the state file
-    try: stateFile = open("%s/state" % dataDir, "r")
-    except IOError:
-      msg = PREPOPULATE_FAILURE_MSG % "unable to read the state file"
-      log.log(self._config["log.graph.bw.prepopulateFailure"], msg)
-      return False
-    
-    # get the BWHistory entries (ordered oldest to newest) and number of
-    # intervals since last recorded
-    bwReadEntries, bwWriteEntries = None, None
-    missingReadEntries, missingWriteEntries = None, None
-    
-    # converts from gmt to local with respect to DST
-    if time.localtime()[8]: tz_offset = time.altzone
-    else: tz_offset = time.timezone
-    
-    for line in stateFile:
-      line = line.strip()
-      
-      if line.startswith("BWHistoryReadValues"):
-        bwReadEntries = line[20:].split(",")
-        bwReadEntries = [int(entry) / 1024.0 / 900 for entry in bwReadEntries]
-      elif line.startswith("BWHistoryWriteValues"):
-        bwWriteEntries = line[21:].split(",")
-        bwWriteEntries = [int(entry) / 1024.0 / 900 for entry in bwWriteEntries]
-      elif line.startswith("BWHistoryReadEnds"):
-        lastReadTime = time.mktime(time.strptime(line[18:], "%Y-%m-%d %H:%M:%S")) - tz_offset
-        missingReadEntries = int((time.time() - lastReadTime) / 900)
-      elif line.startswith("BWHistoryWriteEnds"):
-        lastWriteTime = time.mktime(time.strptime(line[19:], "%Y-%m-%d %H:%M:%S")) - tz_offset
-        missingWriteEntries = int((time.time() - lastWriteTime) / 900)
-    
-    if not bwReadEntries or not bwWriteEntries or not lastReadTime or not lastWriteTime:
-      msg = PREPOPULATE_FAILURE_MSG % "bandwidth stats missing from state file"
-      log.log(self._config["log.graph.bw.prepopulateFailure"], msg)
-      return False
-    
-    # fills missing entries with the last value
-    bwReadEntries += [bwReadEntries[-1]] * missingReadEntries
-    bwWriteEntries += [bwWriteEntries[-1]] * missingWriteEntries
-    
-    # crops starting entries so they're the same size
-    entryCount = min(len(bwReadEntries), len(bwWriteEntries), self.maxCol)
-    bwReadEntries = bwReadEntries[len(bwReadEntries) - entryCount:]
-    bwWriteEntries = bwWriteEntries[len(bwWriteEntries) - entryCount:]
-    
-    # gets index for 15-minute interval
-    intervalIndex = 0
-    for indexEntry in graphPanel.UPDATE_INTERVALS:
-      if indexEntry[1] == 900: break
-      else: intervalIndex += 1
-    
-    # fills the graphing parameters with state information
-    for i in range(entryCount):
-      readVal, writeVal = bwReadEntries[i], bwWriteEntries[i]
-      
-      self.lastPrimary, self.lastSecondary = readVal, writeVal
-      self.primaryTotal += readVal * 900
-      self.secondaryTotal += writeVal * 900
-      self.tick += 900
-      
-      self.primaryCounts[intervalIndex].insert(0, readVal)
-      self.secondaryCounts[intervalIndex].insert(0, writeVal)
-    
-    self.maxPrimary[intervalIndex] = max(self.primaryCounts)
-    self.maxSecondary[intervalIndex] = max(self.secondaryCounts)
-    del self.primaryCounts[intervalIndex][self.maxCol + 1:]
-    del self.secondaryCounts[intervalIndex][self.maxCol + 1:]
-    
-    msg = PREPOPULATE_SUCCESS_MSG
-    missingSec = time.time() - min(lastReadTime, lastWriteTime)
-    if missingSec: msg += " (%s is missing)" % uiTools.getTimeLabel(missingSec, 0, True)
-    log.log(self._config["log.graph.bw.prepopulateSuccess"], msg)
-    
-    return True
-  
-  def bandwidth_event(self, event):
-    if self.isAccounting and self.isNextTickRedraw():
-      if time.time() - self.accountingLastUpdated >= self._config["features.graph.bw.accounting.rate"]:
-        self._updateAccountingInfo()
-    
-    # scales units from B to KB for graphing
-    self._processEvent(event.read / 1024.0, event.written / 1024.0)
-  
-  def draw(self, panel, width, height):
-    # if display is narrow, overwrites x-axis labels with avg / total stats
-    if width <= COLLAPSE_WIDTH:
-      # clears line
-      panel.addstr(8, 0, " " * width)
-      graphCol = min((width - 10) / 2, self.maxCol)
-      
-      primaryFooter = "%s, %s" % (self._getAvgLabel(True), self._getTotalLabel(True))
-      secondaryFooter = "%s, %s" % (self._getAvgLabel(False), self._getTotalLabel(False))
-      
-      panel.addstr(8, 1, primaryFooter, uiTools.getColor(self.getColor(True)))
-      panel.addstr(8, graphCol + 6, secondaryFooter, uiTools.getColor(self.getColor(False)))
-    
-    # provides accounting stats if enabled
-    if self.isAccounting:
-      if torTools.getConn().isAlive():
-        status = self.accountingInfo["status"]
-        
-        hibernateColor = "green"
-        if status == "soft": hibernateColor = "yellow"
-        elif status == "hard": hibernateColor = "red"
-        elif status == "":
-          # failed to be queried
-          status, hibernateColor = "unknown", "red"
-        
-        panel.addfstr(10, 0, "<b>Accounting (<%s>%s</%s>)</b>" % (hibernateColor, status, hibernateColor))
-        
-        resetTime = self.accountingInfo["resetTime"]
-        if not resetTime: resetTime = "unknown"
-        panel.addstr(10, 35, "Time to reset: %s" % resetTime)
-        
-        used, total = self.accountingInfo["read"], self.accountingInfo["readLimit"]
-        if used and total:
-          panel.addstr(11, 2, "%s / %s" % (used, total), uiTools.getColor(self.getColor(True)))
-        
-        used, total = self.accountingInfo["written"], self.accountingInfo["writtenLimit"]
-        if used and total:
-          panel.addstr(11, 37, "%s / %s" % (used, total), uiTools.getColor(self.getColor(False)))
-      else:
-        panel.addfstr(10, 0, "<b>Accounting:</b> Connection Closed...")
-  
-  def getTitle(self, width):
-    stats = list(self._titleStats)
-    
-    while True:
-      if not stats: return "Bandwidth:"
-      else:
-        label = "Bandwidth (%s):" % ", ".join(stats)
-        
-        if len(label) > width: del stats[-1]
-        else: return label
-  
-  def getHeaderLabel(self, width, isPrimary):
-    graphType = "Downloaded" if isPrimary else "Uploaded"
-    stats = [""]
-    
-    # if wide then avg and total are part of the header, otherwise they're on
-    # the x-axis
-    if width * 2 > COLLAPSE_WIDTH:
-      stats = [""] * 3
-      stats[1] = "- %s" % self._getAvgLabel(isPrimary)
-      stats[2] = ", %s" % self._getTotalLabel(isPrimary)
-    
-    stats[0] = "%-14s" % ("%s/sec" % uiTools.getSizeLabel((self.lastPrimary if isPrimary else self.lastSecondary) * 1024, 1))
-    
-    # drops label's components if there's not enough space
-    labeling = graphType + " (" + "".join(stats).strip() + "):"
-    while len(labeling) >= width:
-      if len(stats) > 1:
-        del stats[-1]
-        labeling = graphType + " (" + "".join(stats).strip() + "):"
-      else:
-        labeling = graphType + ":"
-        break
-    
-    return labeling
-  
-  def getColor(self, isPrimary):
-    return DL_COLOR if isPrimary else UL_COLOR
-  
-  def getPreferredHeight(self):
-    return 13 if self.isAccounting else 10
-  
-  def new_desc_event(self, event):
-    # updates self._titleStats with updated values
-    conn = torTools.getConn()
-    if not conn.isAlive(): return # keep old values
-    
-    myFingerprint = conn.getMyFingerprint()
-    if not self._titleStats or not myFingerprint or (event and myFingerprint in event.idlist):
-      stats = []
-      bwRate = conn.getMyBandwidthRate()
-      bwBurst = conn.getMyBandwidthBurst()
-      bwObserved = conn.getMyBandwidthObserved()
-      bwMeasured = conn.getMyBandwidthMeasured()
-      
-      if bwRate and bwBurst:
-        bwRateLabel = uiTools.getSizeLabel(bwRate, 1)
-        bwBurstLabel = uiTools.getSizeLabel(bwBurst, 1)
-        
-        # if both are using rounded values then strip off the ".0" decimal
-        if ".0" in bwRateLabel and ".0" in bwBurstLabel:
-          bwRateLabel = bwRateLabel.replace(".0", "")
-          bwBurstLabel = bwBurstLabel.replace(".0", "")
-        
-        stats.append("limit: %s" % bwRateLabel)
-        stats.append("burst: %s" % bwBurstLabel)
-      
-      # Provide the observed bandwidth either if the measured bandwidth isn't
-      # available or if the measured bandwidth is the observed (this happens
-      # if there isn't yet enough bandwidth measurements).
-      if bwObserved and (not bwMeasured or bwMeasured == bwObserved):
-        stats.append("observed: %s" % uiTools.getSizeLabel(bwObserved, 1))
-      elif bwMeasured:
-        stats.append("measured: %s" % uiTools.getSizeLabel(bwMeasured, 1))
-      
-      self._titleStats = stats
-  
-  def _getAvgLabel(self, isPrimary):
-    total = self.primaryTotal if isPrimary else self.secondaryTotal
-    return "avg: %s/sec" % uiTools.getSizeLabel((total / max(1, self.tick)) * 1024, 1)
-  
-  def _getTotalLabel(self, isPrimary):
-    total = self.primaryTotal if isPrimary else self.secondaryTotal
-    return "total: %s" % uiTools.getSizeLabel(total * 1024, 1)
-  
-  def _updateAccountingInfo(self):
-    """
-    Updates mapping used for accounting info. This includes the following keys:
-    status, resetTime, read, written, readLimit, writtenLimit
-    
-    Any failed lookups result in a mapping to an empty string.
-    """
-    
-    conn = torTools.getConn()
-    queried = dict([(arg, "") for arg in ACCOUNTING_ARGS])
-    queried["status"] = conn.getInfo("accounting/hibernating")
-    
-    # provides a nicely formatted reset time
-    endInterval = conn.getInfo("accounting/interval-end")
-    if endInterval:
-      # converts from gmt to local with respect to DST
-      if time.localtime()[8]: tz_offset = time.altzone
-      else: tz_offset = time.timezone
-      
-      sec = time.mktime(time.strptime(endInterval, "%Y-%m-%d %H:%M:%S")) - time.time() - tz_offset
-      if self._config["features.graph.bw.accounting.isTimeLong"]:
-        queried["resetTime"] = ", ".join(uiTools.getTimeLabels(sec, True))
-      else:
-        days = sec / 86400
-        sec %= 86400
-        hours = sec / 3600
-        sec %= 3600
-        minutes = sec / 60
-        sec %= 60
-        queried["resetTime"] = "%i:%02i:%02i:%02i" % (days, hours, minutes, sec)
-    
-    # number of bytes used and in total for the accounting period
-    used = conn.getInfo("accounting/bytes")
-    left = conn.getInfo("accounting/bytes-left")
-    
-    if used and left:
-      usedComp, leftComp = used.split(" "), left.split(" ")
-      read, written = int(usedComp[0]), int(usedComp[1])
-      readLeft, writtenLeft = int(leftComp[0]), int(leftComp[1])
-      
-      queried["read"] = uiTools.getSizeLabel(read)
-      queried["written"] = uiTools.getSizeLabel(written)
-      queried["readLimit"] = uiTools.getSizeLabel(read + readLeft)
-      queried["writtenLimit"] = uiTools.getSizeLabel(written + writtenLeft)
-    
-    self.accountingInfo = queried
-    self.accountingLastUpdated = time.time()
-

Copied: arm/release/interface/graphing/bandwidthStats.py (from rev 22616, arm/trunk/interface/graphing/bandwidthStats.py)
===================================================================
--- arm/release/interface/graphing/bandwidthStats.py	                        (rev 0)
+++ arm/release/interface/graphing/bandwidthStats.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -0,0 +1,348 @@
+"""
+Tracks bandwidth usage of the tor process, expanding to include accounting
+stats if they're set.
+"""
+
+import time
+
+import graphPanel
+from util import log, sysTools, torTools, uiTools
+
+DL_COLOR, UL_COLOR = "green", "cyan"
+
+# width at which panel abandons placing optional stats (avg and total) with
+# header in favor of replacing the x-axis label
+COLLAPSE_WIDTH = 135
+
+# valid keys for the accountingInfo mapping
+ACCOUNTING_ARGS = ("status", "resetTime", "read", "written", "readLimit", "writtenLimit")
+
+PREPOPULATE_SUCCESS_MSG = "Read the last day of bandwidth history from the state file"
+PREPOPULATE_FAILURE_MSG = "Unable to prepopulate bandwidth information (%s)"
+
+DEFAULT_CONFIG = {"features.graph.bw.accounting.show": True, "features.graph.bw.accounting.rate": 10, "features.graph.bw.accounting.isTimeLong": False, "log.graph.bw.prepopulateSuccess": log.NOTICE, "log.graph.bw.prepopulateFailure": log.NOTICE}
+
+class BandwidthStats(graphPanel.GraphStats):
+  """
+  Uses tor BW events to generate bandwidth usage graph.
+  """
+  
+  def __init__(self, config=None):
+    graphPanel.GraphStats.__init__(self)
+    
+    self._config = dict(DEFAULT_CONFIG)
+    if config:
+      config.update(self._config)
+      self._config["features.graph.bw.accounting.rate"] = max(1, self._config["features.graph.bw.accounting.rate"])
+    
+    # accounting data (set by _updateAccountingInfo method)
+    self.accountingLastUpdated = 0
+    self.accountingInfo = dict([(arg, "") for arg in ACCOUNTING_ARGS])
+    
+    # listens for tor reload (sighup) events which can reset the bandwidth
+    # rate/burst and if tor's using accounting
+    conn = torTools.getConn()
+    self._titleStats, self.isAccounting = [], False
+    self.resetListener(conn, torTools.TOR_INIT) # initializes values
+    conn.addStatusListener(self.resetListener)
+  
+  def resetListener(self, conn, eventType):
+    # updates title parameters and accounting status if they changed
+    self.new_desc_event(None) # updates title params
+    
+    if self._config["features.graph.bw.accounting.show"]:
+      self.isAccounting = conn.getInfo('accounting/enabled') == '1'
+  
+  def prepopulateFromState(self):
+    """
+    Attempts to use tor's state file to prepopulate values for the 15 minute
+    interval via its BWHistoryReadValues/BWHistoryWriteValues entries. This
+    returns True if successful and False otherwise.
+    """
+    
+    # gets the uptime (using the same parameters as the header panel to take
+    # advantage of caching)
+    conn, uptime = torTools.getConn(), None
+    queryPid = conn.getMyPid()
+    if queryPid:
+      queryParam = ["%cpu", "rss", "%mem", "etime"]
+      queryCmd = "ps -p %s -o %s" % (queryPid, ",".join(queryParam))
+      psCall = sysTools.call(queryCmd, 3600, True)
+      
+      if psCall and len(psCall) == 2:
+        stats = psCall[1].strip().split()
+        if len(stats) == 4: uptime = stats[3]
+    
+    # checks if tor has been running for at least a day, the reason being that
+    # the state tracks a day's worth of data and this should only prepopulate
+    # results associated with this tor instance
+    if not uptime or not "-" in uptime:
+      msg = PREPOPULATE_FAILURE_MSG % "insufficient uptime"
+      log.log(self._config["log.graph.bw.prepopulateFailure"], msg)
+      return False
+    
+    # get the user's data directory (usually '~/.tor')
+    dataDir = conn.getOption("DataDirectory")
+    if not dataDir:
+      msg = PREPOPULATE_FAILURE_MSG % "data directory not found"
+      log.log(self._config["log.graph.bw.prepopulateFailure"], msg)
+      return False
+    
+    # attempt to open the state file
+    try: stateFile = open("%s/state" % dataDir, "r")
+    except IOError:
+      msg = PREPOPULATE_FAILURE_MSG % "unable to read the state file"
+      log.log(self._config["log.graph.bw.prepopulateFailure"], msg)
+      return False
+    
+    # get the BWHistory entries (ordered oldest to newest) and number of
+    # intervals since last recorded
+    bwReadEntries, bwWriteEntries = None, None
+    missingReadEntries, missingWriteEntries = None, None
+    lastReadTime, lastWriteTime = None, None
+    
+    # converts from gmt to local with respect to DST
+    if time.localtime()[8]: tz_offset = time.altzone
+    else: tz_offset = time.timezone
+    
+    for line in stateFile:
+      line = line.strip()
+      
+      if line.startswith("BWHistoryReadValues"):
+        bwReadEntries = line[20:].split(",")
+        bwReadEntries = [int(entry) / 1024.0 / 900 for entry in bwReadEntries]
+      elif line.startswith("BWHistoryWriteValues"):
+        bwWriteEntries = line[21:].split(",")
+        bwWriteEntries = [int(entry) / 1024.0 / 900 for entry in bwWriteEntries]
+      elif line.startswith("BWHistoryReadEnds"):
+        lastReadTime = time.mktime(time.strptime(line[18:], "%Y-%m-%d %H:%M:%S")) - tz_offset
+        missingReadEntries = int((time.time() - lastReadTime) / 900)
+      elif line.startswith("BWHistoryWriteEnds"):
+        lastWriteTime = time.mktime(time.strptime(line[19:], "%Y-%m-%d %H:%M:%S")) - tz_offset
+        missingWriteEntries = int((time.time() - lastWriteTime) / 900)
+    
+    if not bwReadEntries or not bwWriteEntries or not lastReadTime or not lastWriteTime:
+      msg = PREPOPULATE_FAILURE_MSG % "bandwidth stats missing from state file"
+      log.log(self._config["log.graph.bw.prepopulateFailure"], msg)
+      return False
+    
+    # fills missing entries with the last value
+    bwReadEntries += [bwReadEntries[-1]] * missingReadEntries
+    bwWriteEntries += [bwWriteEntries[-1]] * missingWriteEntries
+    
+    # crops starting entries so they're the same size
+    entryCount = min(len(bwReadEntries), len(bwWriteEntries), self.maxCol)
+    bwReadEntries = bwReadEntries[len(bwReadEntries) - entryCount:]
+    bwWriteEntries = bwWriteEntries[len(bwWriteEntries) - entryCount:]
+    
+    # gets index for 15-minute interval
+    intervalIndex = 0
+    for indexEntry in graphPanel.UPDATE_INTERVALS:
+      if indexEntry[1] == 900: break
+      else: intervalIndex += 1
+    
+    # fills the graphing parameters with state information
+    for i in range(entryCount):
+      readVal, writeVal = bwReadEntries[i], bwWriteEntries[i]
+      
+      self.lastPrimary, self.lastSecondary = readVal, writeVal
+      self.primaryTotal += readVal * 900
+      self.secondaryTotal += writeVal * 900
+      self.tick += 900
+      
+      self.primaryCounts[intervalIndex].insert(0, readVal)
+      self.secondaryCounts[intervalIndex].insert(0, writeVal)
+    
+    self.maxPrimary[intervalIndex] = max(self.primaryCounts[intervalIndex])
+    self.maxSecondary[intervalIndex] = max(self.secondaryCounts[intervalIndex])
+    del self.primaryCounts[intervalIndex][self.maxCol + 1:]
+    del self.secondaryCounts[intervalIndex][self.maxCol + 1:]
+    
+    msg = PREPOPULATE_SUCCESS_MSG
+    missingSec = time.time() - min(lastReadTime, lastWriteTime)
+    if missingSec: msg += " (%s is missing)" % uiTools.getTimeLabel(missingSec, 0, True)
+    log.log(self._config["log.graph.bw.prepopulateSuccess"], msg)
+    
+    return True
+  
+  def bandwidth_event(self, event):
+    if self.isAccounting and self.isNextTickRedraw():
+      if time.time() - self.accountingLastUpdated >= self._config["features.graph.bw.accounting.rate"]:
+        self._updateAccountingInfo()
+    
+    # scales units from B to KB for graphing
+    self._processEvent(event.read / 1024.0, event.written / 1024.0)
+  
+  def draw(self, panel, width, height):
+    # if display is narrow, overwrites x-axis labels with avg / total stats
+    if width <= COLLAPSE_WIDTH:
+      # clears line
+      panel.addstr(8, 0, " " * width)
+      graphCol = min((width - 10) / 2, self.maxCol)
+      
+      primaryFooter = "%s, %s" % (self._getAvgLabel(True), self._getTotalLabel(True))
+      secondaryFooter = "%s, %s" % (self._getAvgLabel(False), self._getTotalLabel(False))
+      
+      panel.addstr(8, 1, primaryFooter, uiTools.getColor(self.getColor(True)))
+      panel.addstr(8, graphCol + 6, secondaryFooter, uiTools.getColor(self.getColor(False)))
+    
+    # provides accounting stats if enabled
+    if self.isAccounting:
+      if torTools.getConn().isAlive():
+        status = self.accountingInfo["status"]
+        
+        hibernateColor = "green"
+        if status == "soft": hibernateColor = "yellow"
+        elif status == "hard": hibernateColor = "red"
+        elif status == "":
+          # failed to be queried
+          status, hibernateColor = "unknown", "red"
+        
+        panel.addfstr(10, 0, "<b>Accounting (<%s>%s</%s>)</b>" % (hibernateColor, status, hibernateColor))
+        
+        resetTime = self.accountingInfo["resetTime"]
+        if not resetTime: resetTime = "unknown"
+        panel.addstr(10, 35, "Time to reset: %s" % resetTime)
+        
+        used, total = self.accountingInfo["read"], self.accountingInfo["readLimit"]
+        if used and total:
+          panel.addstr(11, 2, "%s / %s" % (used, total), uiTools.getColor(self.getColor(True)))
+        
+        used, total = self.accountingInfo["written"], self.accountingInfo["writtenLimit"]
+        if used and total:
+          panel.addstr(11, 37, "%s / %s" % (used, total), uiTools.getColor(self.getColor(False)))
+      else:
+        panel.addfstr(10, 0, "<b>Accounting:</b> Connection Closed...")
+  
+  def getTitle(self, width):
+    stats = list(self._titleStats)
+    
+    while True:
+      if not stats: return "Bandwidth:"
+      else:
+        label = "Bandwidth (%s):" % ", ".join(stats)
+        
+        if len(label) > width: del stats[-1]
+        else: return label
+  
+  def getHeaderLabel(self, width, isPrimary):
+    graphType = "Downloaded" if isPrimary else "Uploaded"
+    stats = [""]
+    
+    # if wide then avg and total are part of the header, otherwise they're on
+    # the x-axis
+    if width * 2 > COLLAPSE_WIDTH:
+      stats = [""] * 3
+      stats[1] = "- %s" % self._getAvgLabel(isPrimary)
+      stats[2] = ", %s" % self._getTotalLabel(isPrimary)
+    
+    stats[0] = "%-14s" % ("%s/sec" % uiTools.getSizeLabel((self.lastPrimary if isPrimary else self.lastSecondary) * 1024, 1))
+    
+    # drops label's components if there's not enough space
+    labeling = graphType + " (" + "".join(stats).strip() + "):"
+    while len(labeling) >= width:
+      if len(stats) > 1:
+        del stats[-1]
+        labeling = graphType + " (" + "".join(stats).strip() + "):"
+      else:
+        labeling = graphType + ":"
+        break
+    
+    return labeling
+  
+  def getColor(self, isPrimary):
+    return DL_COLOR if isPrimary else UL_COLOR
+  
+  def getPreferredHeight(self):
+    return 13 if self.isAccounting else 10
+  
+  def new_desc_event(self, event):
+    # updates self._titleStats with updated values
+    conn = torTools.getConn()
+    if not conn.isAlive(): return # keep old values
+    
+    myFingerprint = conn.getMyFingerprint()
+    if not self._titleStats or not myFingerprint or (event and myFingerprint in event.idlist):
+      stats = []
+      bwRate = conn.getMyBandwidthRate()
+      bwBurst = conn.getMyBandwidthBurst()
+      bwObserved = conn.getMyBandwidthObserved()
+      bwMeasured = conn.getMyBandwidthMeasured()
+      
+      if bwRate and bwBurst:
+        bwRateLabel = uiTools.getSizeLabel(bwRate, 1)
+        bwBurstLabel = uiTools.getSizeLabel(bwBurst, 1)
+        
+        # if both are using rounded values then strip off the ".0" decimal
+        if ".0" in bwRateLabel and ".0" in bwBurstLabel:
+          bwRateLabel = bwRateLabel.replace(".0", "")
+          bwBurstLabel = bwBurstLabel.replace(".0", "")
+        
+        stats.append("limit: %s" % bwRateLabel)
+        stats.append("burst: %s" % bwBurstLabel)
+      
+      # Provide the observed bandwidth either if the measured bandwidth isn't
+      # available or if the measured bandwidth matches the observed value (which
+      # happens when there aren't yet enough bandwidth measurements).
+      if bwObserved and (not bwMeasured or bwMeasured == bwObserved):
+        stats.append("observed: %s" % uiTools.getSizeLabel(bwObserved, 1))
+      elif bwMeasured:
+        stats.append("measured: %s" % uiTools.getSizeLabel(bwMeasured, 1))
+      
+      self._titleStats = stats
+  
+  def _getAvgLabel(self, isPrimary):
+    total = self.primaryTotal if isPrimary else self.secondaryTotal
+    return "avg: %s/sec" % uiTools.getSizeLabel((total / max(1, self.tick)) * 1024, 1)
+  
+  def _getTotalLabel(self, isPrimary):
+    total = self.primaryTotal if isPrimary else self.secondaryTotal
+    return "total: %s" % uiTools.getSizeLabel(total * 1024, 1)
+  
+  def _updateAccountingInfo(self):
+    """
+    Updates mapping used for accounting info. This includes the following keys:
+    status, resetTime, read, written, readLimit, writtenLimit
+    
+    Any failed lookups result in a mapping to an empty string.
+    """
+    
+    conn = torTools.getConn()
+    queried = dict([(arg, "") for arg in ACCOUNTING_ARGS])
+    queried["status"] = conn.getInfo("accounting/hibernating")
+    
+    # provides a nicely formatted reset time
+    endInterval = conn.getInfo("accounting/interval-end")
+    if endInterval:
+      # converts from gmt to local with respect to DST
+      if time.localtime()[8]: tz_offset = time.altzone
+      else: tz_offset = time.timezone
+      
+      sec = time.mktime(time.strptime(endInterval, "%Y-%m-%d %H:%M:%S")) - time.time() - tz_offset
+      if self._config["features.graph.bw.accounting.isTimeLong"]:
+        queried["resetTime"] = ", ".join(uiTools.getTimeLabels(sec, True))
+      else:
+        days = sec / 86400
+        sec %= 86400
+        hours = sec / 3600
+        sec %= 3600
+        minutes = sec / 60
+        sec %= 60
+        queried["resetTime"] = "%i:%02i:%02i:%02i" % (days, hours, minutes, sec)
+    
+    # number of bytes used and in total for the accounting period
+    used = conn.getInfo("accounting/bytes")
+    left = conn.getInfo("accounting/bytes-left")
+    
+    if used and left:
+      usedComp, leftComp = used.split(" "), left.split(" ")
+      read, written = int(usedComp[0]), int(usedComp[1])
+      readLeft, writtenLeft = int(leftComp[0]), int(leftComp[1])
+      
+      queried["read"] = uiTools.getSizeLabel(read)
+      queried["written"] = uiTools.getSizeLabel(written)
+      queried["readLimit"] = uiTools.getSizeLabel(read + readLeft)
+      queried["writtenLimit"] = uiTools.getSizeLabel(written + writtenLeft)
+    
+    self.accountingInfo = queried
+    self.accountingLastUpdated = time.time()
+
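For reference, the BWHistory entries that prepopulateFromState parses look roughly like the line below. This is a minimal standalone sketch of the same scaling used above (bytes accumulated per 900 second interval converted to KB/s); the sample line and its numbers are made up:

  # hypothetical BWHistoryReadValues line from tor's state file; each comma
  # separated entry is the number of bytes read over a 15 minute interval
  sampleLine = "BWHistoryReadValues 915406848,909639680,912302080"

  if sampleLine.startswith("BWHistoryReadValues"):
    entries = sampleLine[20:].split(",")
    ratesKBps = [int(entry) / 1024.0 / 900 for entry in entries]
    print(ratesKBps)  # per-interval averages in KB/s (roughly 990 each here)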

Deleted: arm/release/interface/graphing/connStats.py
===================================================================
--- arm/trunk/interface/graphing/connStats.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/graphing/connStats.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,51 +0,0 @@
-"""
-Tracks stats concerning tor's current connections.
-"""
-
-import graphPanel
-from util import connections, torTools
-
-class ConnStats(graphPanel.GraphStats):
-  """
-  Tracks number of connections, counting client and directory connections as 
-  outbound. Control connections are excluded from counts.
-  """
-  
-  def __init__(self):
-    graphPanel.GraphStats.__init__(self)
-    
-    # listens for tor reload (sighup) events which can reset the ports tor uses
-    conn = torTools.getConn()
-    self.orPort, self.dirPort, self.controlPort = "0", "0", "0"
-    self.resetListener(conn, torTools.TOR_INIT) # initialize port values
-    conn.addStatusListener(self.resetListener)
-  
-  def resetListener(self, conn, eventType):
-    if eventType == torTools.TOR_INIT:
-      self.orPort = conn.getOption("ORPort", "0")
-      self.dirPort = conn.getOption("DirPort", "0")
-      self.controlPort = conn.getOption("ControlPort", "0")
-  
-  def eventTick(self):
-    """
-    Fetches connection stats from cached information.
-    """
-    
-    inboundCount, outboundCount = 0, 0
-    
-    for entry in connections.getResolver("tor").getConnections():
-      localPort = entry[1]
-      if localPort in (self.orPort, self.dirPort): inboundCount += 1
-      elif localPort == self.controlPort: pass # control connection
-      else: outboundCount += 1
-    
-    self._processEvent(inboundCount, outboundCount)
-  
-  def getTitle(self, width):
-    return "Connection Count:"
-  
-  def getHeaderLabel(self, width, isPrimary):
-    avg = (self.primaryTotal if isPrimary else self.secondaryTotal) / max(1, self.tick)
-    if isPrimary: return "Inbound (%s, avg: %s):" % (self.lastPrimary, avg)
-    else: return "Outbound (%s, avg: %s):" % (self.lastSecondary, avg)
-

Copied: arm/release/interface/graphing/connStats.py (from rev 22616, arm/trunk/interface/graphing/connStats.py)
===================================================================
--- arm/release/interface/graphing/connStats.py	                        (rev 0)
+++ arm/release/interface/graphing/connStats.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -0,0 +1,51 @@
+"""
+Tracks stats concerning tor's current connections.
+"""
+
+import graphPanel
+from util import connections, torTools
+
+class ConnStats(graphPanel.GraphStats):
+  """
+  Tracks number of connections, counting client and directory connections as 
+  outbound. Control connections are excluded from counts.
+  """
+  
+  def __init__(self):
+    graphPanel.GraphStats.__init__(self)
+    
+    # listens for tor reload (sighup) events which can reset the ports tor uses
+    conn = torTools.getConn()
+    self.orPort, self.dirPort, self.controlPort = "0", "0", "0"
+    self.resetListener(conn, torTools.TOR_INIT) # initialize port values
+    conn.addStatusListener(self.resetListener)
+  
+  def resetListener(self, conn, eventType):
+    if eventType == torTools.TOR_INIT:
+      self.orPort = conn.getOption("ORPort", "0")
+      self.dirPort = conn.getOption("DirPort", "0")
+      self.controlPort = conn.getOption("ControlPort", "0")
+  
+  def eventTick(self):
+    """
+    Fetches connection stats from cached information.
+    """
+    
+    inboundCount, outboundCount = 0, 0
+    
+    for entry in connections.getResolver("tor").getConnections():
+      localPort = entry[1]
+      if localPort in (self.orPort, self.dirPort): inboundCount += 1
+      elif localPort == self.controlPort: pass # control connection
+      else: outboundCount += 1
+    
+    self._processEvent(inboundCount, outboundCount)
+  
+  def getTitle(self, width):
+    return "Connection Count:"
+  
+  def getHeaderLabel(self, width, isPrimary):
+    avg = (self.primaryTotal if isPrimary else self.secondaryTotal) / max(1, self.tick)
+    if isPrimary: return "Inbound (%s, avg: %s):" % (self.lastPrimary, avg)
+    else: return "Outbound (%s, avg: %s):" % (self.lastSecondary, avg)
+
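To see the inbound/outbound classification from eventTick above in isolation, here's a minimal sketch with made-up ports and connections (the real code pulls these from tor's configuration and the connection resolver):

  orPort, dirPort, controlPort = "9001", "9030", "9051"  # hypothetical ports
  localPorts = ["9001", "43210", "9051", "9030"]         # hypothetical local ports

  inboundCount, outboundCount = 0, 0
  for localPort in localPorts:
    if localPort in (orPort, dirPort): inboundCount += 1
    elif localPort == controlPort: pass  # control connection, excluded
    else: outboundCount += 1

  print(inboundCount, outboundCount)  # 2 inbound, 1 outbound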

Deleted: arm/release/interface/graphing/graphPanel.py
===================================================================
--- arm/trunk/interface/graphing/graphPanel.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/graphing/graphPanel.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,363 +0,0 @@
-"""
-Flexible panel for presenting bar graphs for a variety of stats. This panel is
-just concerned with the rendering of information, which is actually collected
-and stored by implementations of the GraphStats interface. Panels are made up
-of a title, followed by headers and graphs for two sets of stats. For
-instance...
-
-Bandwidth (cap: 5 MB, burst: 10 MB):
-Downloaded (0.0 B/sec):           Uploaded (0.0 B/sec):
-  34                                30
-                            *                                 *
-                    **  *   *                          *      **
-      *   *  *      ** **   **          ***  **       ** **   **
-     *********      ******  ******     *********      ******  ******
-   0 ************ ****************   0 ************ ****************
-         25s  50   1m   1.6  2.0           25s  50   1m   1.6  2.0
-"""
-
-import copy
-import curses
-from TorCtl import TorCtl
-
-from util import panel, uiTools
-
-# time intervals at which graphs can be updated
-UPDATE_INTERVALS = [("each second", 1), ("5 seconds", 5),   ("30 seconds", 30),
-                    ("minutely", 60),   ("15 minute", 900), ("30 minute", 1800),
-                    ("hourly", 3600),   ("daily", 86400)]
-
-DEFAULT_HEIGHT = 10 # space needed for graph and content
-DEFAULT_COLOR_PRIMARY, DEFAULT_COLOR_SECONDARY = "green", "cyan"
-
-# enums for graph bounds:
-#   BOUNDS_GLOBAL_MAX - global maximum (highest value ever seen)
-#   BOUNDS_LOCAL_MAX - local maximum (highest value currently on the graph)
-#   BOUNDS_TIGHT - local maximum and minimum
-BOUNDS_GLOBAL_MAX, BOUNDS_LOCAL_MAX, BOUNDS_TIGHT = range(3)
-BOUND_LABELS = {BOUNDS_GLOBAL_MAX: "global max", BOUNDS_LOCAL_MAX: "local max", BOUNDS_TIGHT: "tight"}
-
-WIDE_LABELING_GRAPH_COL = 50  # minimum graph columns to use wide spacing for x-axis labels
-
-# used for setting defaults when initializing GraphStats and GraphPanel instances
-CONFIG = {"features.graph.interval": 0, "features.graph.bound": 1, "features.graph.maxSize": 150, "features.graph.frequentRefresh": True}
-
-def loadConfig(config):
-  config.update(CONFIG)
-  CONFIG["features.graph.interval"] = max(len(UPDATE_INTERVALS) - 1, min(0, CONFIG["features.graph.interval"]))
-  CONFIG["features.graph.bound"] = max(2, min(0, CONFIG["features.graph.bound"]))
-  CONFIG["features.graph.maxSize"] = max(CONFIG["features.graph.maxSize"], 1)
-
-class GraphStats(TorCtl.PostEventListener):
-  """
-  Module that's expected to update dynamically and provide attributes to be
-  graphed. Up to two graphs (a 'primary' and 'secondary') can be displayed at a
-  time and timescale parameters use the labels defined in UPDATE_INTERVALS.
-  """
-  
-  def __init__(self, isPauseBuffer=False):
-    """
-    Initializes parameters needed to present a graph.
-    """
-    
-    TorCtl.PostEventListener.__init__(self)
-    
-    # panel to be redrawn when updated (set when added to GraphPanel)
-    self._graphPanel = None
-    
-    # mirror instance used to track updates when paused
-    self.isPaused, self.isPauseBuffer = False, isPauseBuffer
-    if isPauseBuffer: self._pauseBuffer = None
-    else: self._pauseBuffer = GraphStats(True)
-    
-    # tracked stats
-    self.tick = 0                                 # number of processed events
-    self.lastPrimary, self.lastSecondary = 0, 0   # most recent registered stats
-    self.primaryTotal, self.secondaryTotal = 0, 0 # sum of all stats seen
-    
-    # timescale dependent stats
-    self.maxCol = CONFIG["features.graph.maxSize"]
-    self.maxPrimary, self.maxSecondary = {}, {}
-    self.primaryCounts, self.secondaryCounts = {}, {}
-    
-    for i in range(len(UPDATE_INTERVALS)):
-      # recent rates for graph
-      self.maxPrimary[i] = 0
-      self.maxSecondary[i] = 0
-      
-      # historic stats for graph, first is accumulator
-      # iterative insert needed to avoid making shallow copies (nasty, nasty gotcha)
-      self.primaryCounts[i] = (self.maxCol + 1) * [0]
-      self.secondaryCounts[i] = (self.maxCol + 1) * [0]
-  
-  def eventTick(self):
-    """
-    Called when it's time to process another event. All graphs use tor BW
-    events to keep in sync with each other (this happens once a second).
-    """
-    
-    pass
-  
-  def isNextTickRedraw(self):
-    """
-    Provides true if the following tick (call to _processEvent) will result in
-    being redrawn.
-    """
-    
-    if self._graphPanel and not self.isPauseBuffer and not self.isPaused:
-      if CONFIG["features.graph.frequentRefresh"]: return True
-      else:
-        updateRate = UPDATE_INTERVALS[self._graphPanel.updateInterval][1]
-        if (self.tick + 1) % updateRate == 0: return True
-    
-    return False
-  
-  def getTitle(self, width):
-    """
-    Provides top label.
-    """
-    
-    return ""
-  
-  def getHeaderLabel(self, width, isPrimary):
-    """
-    Provides labeling presented at the top of the graph.
-    """
-    
-    return ""
-  
-  def getColor(self, isPrimary):
-    """
-    Provides the color to be used for the graph and stats.
-    """
-    
-    return DEFAULT_COLOR_PRIMARY if isPrimary else DEFAULT_COLOR_SECONDARY
-  
-  def getPreferredHeight(self):
-    """
-    Provides the height content should take up. By default this provides the
-    space needed for the default graph and content.
-    """
-    
-    return DEFAULT_HEIGHT
-  
-  def draw(self, panel, width, height):
-    """
-    Allows for any custom drawing monitor wishes to append.
-    """
-    
-    pass
-  
-  def setPaused(self, isPause):
-    """
-    If true, prevents bandwidth updates from being presented. This is a no-op
-    if a pause buffer.
-    """
-    
-    if isPause == self.isPaused or self.isPauseBuffer: return
-    self.isPaused = isPause
-    
-    if self.isPaused: active, inactive = self._pauseBuffer, self
-    else: active, inactive = self, self._pauseBuffer
-    self._parameterSwap(active, inactive)
-  
-  def bandwidth_event(self, event):
-    self.eventTick()
-  
-  def _parameterSwap(self, active, inactive):
-    """
-    Either overwrites parameters of pauseBuffer or with the current values or
-    vice versa. This is a helper method for setPaused and should be overwritten
-    to append with additional parameters that need to be preserved when paused.
-    """
-    
-    # The pause buffer is constructed as a GraphStats instance which will
-    # become problematic if this is overridden by any implementations (which
-    # currently isn't the case). If this happens then the pause buffer will
-    # need to be of the requester's type (not quite sure how to do this
-    # gracefully...).
-    
-    active.tick = inactive.tick
-    active.lastPrimary = inactive.lastPrimary
-    active.lastSecondary = inactive.lastSecondary
-    active.primaryTotal = inactive.primaryTotal
-    active.secondaryTotal = inactive.secondaryTotal
-    active.maxPrimary = dict(inactive.maxPrimary)
-    active.maxSecondary = dict(inactive.maxSecondary)
-    active.primaryCounts = copy.deepcopy(inactive.primaryCounts)
-    active.secondaryCounts = copy.deepcopy(inactive.secondaryCounts)
-  
-  def _processEvent(self, primary, secondary):
-    """
-    Includes new stats in graphs and notifies associated GraphPanel of changes.
-    """
-    
-    if self.isPaused: self._pauseBuffer._processEvent(primary, secondary)
-    else:
-      isRedraw = self.isNextTickRedraw()
-      
-      self.lastPrimary, self.lastSecondary = primary, secondary
-      self.primaryTotal += primary
-      self.secondaryTotal += secondary
-      
-      # updates for all time intervals
-      self.tick += 1
-      for i in range(len(UPDATE_INTERVALS)):
-        lable, timescale = UPDATE_INTERVALS[i]
-        
-        self.primaryCounts[i][0] += primary
-        self.secondaryCounts[i][0] += secondary
-        
-        if self.tick % timescale == 0:
-          self.maxPrimary[i] = max(self.maxPrimary[i], self.primaryCounts[i][0] / timescale)
-          self.primaryCounts[i][0] /= timescale
-          self.primaryCounts[i].insert(0, 0)
-          del self.primaryCounts[i][self.maxCol + 1:]
-          
-          self.maxSecondary[i] = max(self.maxSecondary[i], self.secondaryCounts[i][0] / timescale)
-          self.secondaryCounts[i][0] /= timescale
-          self.secondaryCounts[i].insert(0, 0)
-          del self.secondaryCounts[i][self.maxCol + 1:]
-      
-      if isRedraw: self._graphPanel.redraw(True)
-
-class GraphPanel(panel.Panel):
-  """
-  Panel displaying a graph, drawing statistics from custom GraphStats
-  implementations.
-  """
-  
-  def __init__(self, stdscr):
-    panel.Panel.__init__(self, stdscr, "graph", 0)
-    self.updateInterval = CONFIG["features.graph.interval"]
-    self.bounds = CONFIG["features.graph.bound"]
-    self.currentDisplay = None    # label of the stats currently being displayed
-    self.stats = {}               # available stats (mappings of label -> instance)
-    self.showLabel = True         # shows top label if true, hides otherwise
-    self.isPaused = False
-  
-  def getHeight(self):
-    """
-    Provides the height requested by the currently displayed GraphStats (zero
-    if hidden).
-    """
-    
-    if self.currentDisplay:
-      return self.stats[self.currentDisplay].getPreferredHeight()
-    else: return 0
-  
-  def draw(self, subwindow, width, height):
-    """ Redraws graph panel """
-    
-    if self.currentDisplay:
-      param = self.stats[self.currentDisplay]
-      graphCol = min((width - 10) / 2, param.maxCol)
-      
-      primaryColor = uiTools.getColor(param.getColor(True))
-      secondaryColor = uiTools.getColor(param.getColor(False))
-      
-      if self.showLabel: self.addstr(0, 0, param.getTitle(width), curses.A_STANDOUT)
-      
-      # top labels
-      left, right = param.getHeaderLabel(width / 2, True), param.getHeaderLabel(width / 2, False)
-      if left: self.addstr(1, 0, left, curses.A_BOLD | primaryColor)
-      if right: self.addstr(1, graphCol + 5, right, curses.A_BOLD | secondaryColor)
-      
-      # determines max/min value on the graph
-      if self.bounds == BOUNDS_GLOBAL_MAX:
-        primaryMaxBound = param.maxPrimary[self.updateInterval]
-        secondaryMaxBound = param.maxSecondary[self.updateInterval]
-      else:
-        # both BOUNDS_LOCAL_MAX and BOUNDS_TIGHT use local maxima
-        if graphCol < 2:
-          # nothing being displayed
-          primaryMaxBound, secondaryMaxBound = 0, 0
-        else:
-          primaryMaxBound = max(param.primaryCounts[self.updateInterval][1:graphCol + 1])
-          secondaryMaxBound = max(param.secondaryCounts[self.updateInterval][1:graphCol + 1])
-      
-      primaryMinBound = secondaryMinBound = 0
-      if self.bounds == BOUNDS_TIGHT:
-        primaryMinBound = min(param.primaryCounts[self.updateInterval][1:graphCol + 1])
-        secondaryMinBound = min(param.secondaryCounts[self.updateInterval][1:graphCol + 1])
-        
-        # if the max = min (ie, all values are the same) then use zero lower
-        # bound so a graph is still displayed
-        if primaryMinBound == primaryMaxBound: primaryMinBound = 0
-        if secondaryMinBound == secondaryMaxBound: secondaryMinBound = 0
-      
-      # displays bound
-      self.addstr(2, 0, "%4i" % primaryMaxBound, primaryColor)
-      self.addstr(7, 0, "%4i" % primaryMinBound, primaryColor)
-      
-      self.addstr(2, graphCol + 5, "%4i" % secondaryMaxBound, secondaryColor)
-      self.addstr(7, graphCol + 5, "%4i" % secondaryMinBound, secondaryColor)
-      
-      # creates bar graph (both primary and secondary)
-      for col in range(graphCol):
-        colCount = param.primaryCounts[self.updateInterval][col + 1] - primaryMinBound
-        colHeight = min(5, 5 * colCount / (max(1, primaryMaxBound) - primaryMinBound))
-        for row in range(colHeight): self.addstr(7 - row, col + 5, " ", curses.A_STANDOUT | primaryColor)
-        
-        colCount = param.secondaryCounts[self.updateInterval][col + 1] - secondaryMinBound
-        colHeight = min(5, 5 * colCount / (max(1, secondaryMaxBound) - secondaryMinBound))
-        for row in range(colHeight): self.addstr(7 - row, col + graphCol + 10, " ", curses.A_STANDOUT | secondaryColor)
-      
-      # bottom labeling of x-axis
-      intervalSec = 1 # seconds per labeling
-      for i in range(len(UPDATE_INTERVALS)):
-        if i == self.updateInterval: intervalSec = UPDATE_INTERVALS[i][1]
-      
-      intervalSpacing = 10 if graphCol >= WIDE_LABELING_GRAPH_COL else 5
-      unitsLabel, decimalPrecision = None, 0
-      for i in range((graphCol - 4) / intervalSpacing):
-        loc = (i + 1) * intervalSpacing
-        timeLabel = uiTools.getTimeLabel(loc * intervalSec, decimalPrecision)
-        
-        if not unitsLabel: unitsLabel = timeLabel[-1]
-        elif unitsLabel != timeLabel[-1]:
-          # upped scale so also up precision of future measurements
-          unitsLabel = timeLabel[-1]
-          decimalPrecision += 1
-        else:
-          # if constrained on space then strips labeling since already provided
-          timeLabel = timeLabel[:-1]
-        
-        self.addstr(8, 4 + loc, timeLabel, primaryColor)
-        self.addstr(8, graphCol + 10 + loc, timeLabel, secondaryColor)
-        
-      param.draw(self, width, height) # allows current stats to modify the display
-  
-  def addStats(self, label, stats):
-    """
-    Makes GraphStats instance available in the panel.
-    """
-    
-    stats._graphPanel = self
-    stats.isPaused = True
-    self.stats[label] = stats
-  
-  def setStats(self, label):
-    """
-    Sets the currently displayed stats instance, hiding panel if None.
-    """
-    
-    if label != self.currentDisplay:
-      if self.currentDisplay: self.stats[self.currentDisplay].setPaused(True)
-      
-      if not label:
-        self.currentDisplay = None
-      elif label in self.stats.keys():
-        self.currentDisplay = label
-        self.stats[label].setPaused(self.isPaused)
-      else: raise ValueError("Unrecognized stats label: %s" % label)
-  
-  def setPaused(self, isPause):
-    """
-    If true, prevents bandwidth updates from being presented.
-    """
-    
-    if isPause == self.isPaused: return
-    self.isPaused = isPause
-    if self.currentDisplay: self.stats[self.currentDisplay].setPaused(self.isPaused)
-

Copied: arm/release/interface/graphing/graphPanel.py (from rev 22616, arm/trunk/interface/graphing/graphPanel.py)
===================================================================
--- arm/release/interface/graphing/graphPanel.py	                        (rev 0)
+++ arm/release/interface/graphing/graphPanel.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -0,0 +1,363 @@
+"""
+Flexible panel for presenting bar graphs for a variety of stats. This panel is
+just concerned with the rendering of information, which is actually collected
+and stored by implementations of the GraphStats interface. Panels are made up
+of a title, followed by headers and graphs for two sets of stats. For
+instance...
+
+Bandwidth (cap: 5 MB, burst: 10 MB):
+Downloaded (0.0 B/sec):           Uploaded (0.0 B/sec):
+  34                                30
+                            *                                 *
+                    **  *   *                          *      **
+      *   *  *      ** **   **          ***  **       ** **   **
+     *********      ******  ******     *********      ******  ******
+   0 ************ ****************   0 ************ ****************
+         25s  50   1m   1.6  2.0           25s  50   1m   1.6  2.0
+"""
+
+import copy
+import curses
+from TorCtl import TorCtl
+
+from util import panel, uiTools
+
+# time intervals at which graphs can be updated
+UPDATE_INTERVALS = [("each second", 1), ("5 seconds", 5),   ("30 seconds", 30),
+                    ("minutely", 60),   ("15 minute", 900), ("30 minute", 1800),
+                    ("hourly", 3600),   ("daily", 86400)]
+
+DEFAULT_HEIGHT = 10 # space needed for graph and content
+DEFAULT_COLOR_PRIMARY, DEFAULT_COLOR_SECONDARY = "green", "cyan"
+
+# enums for graph bounds:
+#   BOUNDS_GLOBAL_MAX - global maximum (highest value ever seen)
+#   BOUNDS_LOCAL_MAX - local maximum (highest value currently on the graph)
+#   BOUNDS_TIGHT - local maximum and minimum
+BOUNDS_GLOBAL_MAX, BOUNDS_LOCAL_MAX, BOUNDS_TIGHT = range(3)
+BOUND_LABELS = {BOUNDS_GLOBAL_MAX: "global max", BOUNDS_LOCAL_MAX: "local max", BOUNDS_TIGHT: "tight"}
+
+WIDE_LABELING_GRAPH_COL = 50  # minimum graph columns to use wide spacing for x-axis labels
+
+# used for setting defaults when initializing GraphStats and GraphPanel instances
+CONFIG = {"features.graph.interval": 0, "features.graph.bound": 1, "features.graph.maxSize": 150, "features.graph.frequentRefresh": True}
+
+def loadConfig(config):
+  config.update(CONFIG)
+  CONFIG["features.graph.interval"] = max(len(UPDATE_INTERVALS) - 1, min(0, CONFIG["features.graph.interval"]))
+  CONFIG["features.graph.bound"] = max(2, min(0, CONFIG["features.graph.bound"]))
+  CONFIG["features.graph.maxSize"] = max(CONFIG["features.graph.maxSize"], 1)
+
+class GraphStats(TorCtl.PostEventListener):
+  """
+  Module that's expected to update dynamically and provide attributes to be
+  graphed. Up to two graphs (a 'primary' and 'secondary') can be displayed at a
+  time and timescale parameters use the labels defined in UPDATE_INTERVALS.
+  """
+  
+  def __init__(self, isPauseBuffer=False):
+    """
+    Initializes parameters needed to present a graph.
+    """
+    
+    TorCtl.PostEventListener.__init__(self)
+    
+    # panel to be redrawn when updated (set when added to GraphPanel)
+    self._graphPanel = None
+    
+    # mirror instance used to track updates when paused
+    self.isPaused, self.isPauseBuffer = False, isPauseBuffer
+    if isPauseBuffer: self._pauseBuffer = None
+    else: self._pauseBuffer = GraphStats(True)
+    
+    # tracked stats
+    self.tick = 0                                 # number of processed events
+    self.lastPrimary, self.lastSecondary = 0, 0   # most recent registered stats
+    self.primaryTotal, self.secondaryTotal = 0, 0 # sum of all stats seen
+    
+    # timescale dependent stats
+    self.maxCol = CONFIG["features.graph.maxSize"]
+    self.maxPrimary, self.maxSecondary = {}, {}
+    self.primaryCounts, self.secondaryCounts = {}, {}
+    
+    for i in range(len(UPDATE_INTERVALS)):
+      # recent rates for graph
+      self.maxPrimary[i] = 0
+      self.maxSecondary[i] = 0
+      
+      # historic stats for graph, first is accumulator
+      # iterative insert needed to avoid making shallow copies (nasty, nasty gotcha)
+      self.primaryCounts[i] = (self.maxCol + 1) * [0]
+      self.secondaryCounts[i] = (self.maxCol + 1) * [0]
+  
+  def eventTick(self):
+    """
+    Called when it's time to process another event. All graphs use tor BW
+    events to keep in sync with each other (this happens once a second).
+    """
+    
+    pass
+  
+  def isNextTickRedraw(self):
+    """
+    Provides true if the following tick (call to _processEvent) will result in
+    the panel being redrawn.
+    """
+    
+    if self._graphPanel and not self.isPauseBuffer and not self.isPaused:
+      if CONFIG["features.graph.frequentRefresh"]: return True
+      else:
+        updateRate = UPDATE_INTERVALS[self._graphPanel.updateInterval][1]
+        if (self.tick + 1) % updateRate == 0: return True
+    
+    return False
+  
+  def getTitle(self, width):
+    """
+    Provides top label.
+    """
+    
+    return ""
+  
+  def getHeaderLabel(self, width, isPrimary):
+    """
+    Provides labeling presented at the top of the graph.
+    """
+    
+    return ""
+  
+  def getColor(self, isPrimary):
+    """
+    Provides the color to be used for the graph and stats.
+    """
+    
+    return DEFAULT_COLOR_PRIMARY if isPrimary else DEFAULT_COLOR_SECONDARY
+  
+  def getPreferredHeight(self):
+    """
+    Provides the height content should take up. By default this provides the
+    space needed for the default graph and content.
+    """
+    
+    return DEFAULT_HEIGHT
+  
+  def draw(self, panel, width, height):
+    """
+    Allows for any custom drawing the monitor wishes to append.
+    """
+    
+    pass
+  
+  def setPaused(self, isPause):
+    """
+    If true, prevents bandwidth updates from being presented. This is a no-op
+    if this is a pause buffer.
+    """
+    
+    if isPause == self.isPaused or self.isPauseBuffer: return
+    self.isPaused = isPause
+    
+    if self.isPaused: active, inactive = self._pauseBuffer, self
+    else: active, inactive = self, self._pauseBuffer
+    self._parameterSwap(active, inactive)
+  
+  def bandwidth_event(self, event):
+    self.eventTick()
+  
+  def _parameterSwap(self, active, inactive):
+    """
+    Either overwrites the pause buffer's parameters with the current values or
+    vice versa. This is a helper method for setPaused and should be overridden
+    to copy any additional parameters that need to be preserved when paused.
+    """
+    
+    # The pause buffer is constructed as a GraphStats instance which will
+    # become problematic if this is overridden by any implementations (which
+    # currently isn't the case). If this happens then the pause buffer will
+    # need to be of the requester's type (not quite sure how to do this
+    # gracefully...).
+    
+    active.tick = inactive.tick
+    active.lastPrimary = inactive.lastPrimary
+    active.lastSecondary = inactive.lastSecondary
+    active.primaryTotal = inactive.primaryTotal
+    active.secondaryTotal = inactive.secondaryTotal
+    active.maxPrimary = dict(inactive.maxPrimary)
+    active.maxSecondary = dict(inactive.maxSecondary)
+    active.primaryCounts = copy.deepcopy(inactive.primaryCounts)
+    active.secondaryCounts = copy.deepcopy(inactive.secondaryCounts)
+  
+  def _processEvent(self, primary, secondary):
+    """
+    Includes new stats in graphs and notifies associated GraphPanel of changes.
+    """
+    
+    if self.isPaused: self._pauseBuffer._processEvent(primary, secondary)
+    else:
+      isRedraw = self.isNextTickRedraw()
+      
+      self.lastPrimary, self.lastSecondary = primary, secondary
+      self.primaryTotal += primary
+      self.secondaryTotal += secondary
+      
+      # updates for all time intervals
+      self.tick += 1
+      for i in range(len(UPDATE_INTERVALS)):
+        label, timescale = UPDATE_INTERVALS[i]
+        
+        self.primaryCounts[i][0] += primary
+        self.secondaryCounts[i][0] += secondary
+        
+        if self.tick % timescale == 0:
+          self.maxPrimary[i] = max(self.maxPrimary[i], self.primaryCounts[i][0] / timescale)
+          self.primaryCounts[i][0] /= timescale
+          self.primaryCounts[i].insert(0, 0)
+          del self.primaryCounts[i][self.maxCol + 1:]
+          
+          self.maxSecondary[i] = max(self.maxSecondary[i], self.secondaryCounts[i][0] / timescale)
+          self.secondaryCounts[i][0] /= timescale
+          self.secondaryCounts[i].insert(0, 0)
+          del self.secondaryCounts[i][self.maxCol + 1:]
+      
+      if isRedraw: self._graphPanel.redraw(True)
+
+class GraphPanel(panel.Panel):
+  """
+  Panel displaying a graph, drawing statistics from custom GraphStats
+  implementations.
+  """
+  
+  def __init__(self, stdscr):
+    panel.Panel.__init__(self, stdscr, "graph", 0)
+    self.updateInterval = CONFIG["features.graph.interval"]
+    self.bounds = CONFIG["features.graph.bound"]
+    self.currentDisplay = None    # label of the stats currently being displayed
+    self.stats = {}               # available stats (mappings of label -> instance)
+    self.showLabel = True         # shows top label if true, hides otherwise
+    self.isPaused = False
+  
+  def getHeight(self):
+    """
+    Provides the height requested by the currently displayed GraphStats (zero
+    if hidden).
+    """
+    
+    if self.currentDisplay:
+      return self.stats[self.currentDisplay].getPreferredHeight()
+    else: return 0
+  
+  def draw(self, subwindow, width, height):
+    """ Redraws graph panel """
+    
+    if self.currentDisplay:
+      param = self.stats[self.currentDisplay]
+      graphCol = min((width - 10) / 2, param.maxCol)
+      
+      primaryColor = uiTools.getColor(param.getColor(True))
+      secondaryColor = uiTools.getColor(param.getColor(False))
+      
+      if self.showLabel: self.addstr(0, 0, param.getTitle(width), curses.A_STANDOUT)
+      
+      # top labels
+      left, right = param.getHeaderLabel(width / 2, True), param.getHeaderLabel(width / 2, False)
+      if left: self.addstr(1, 0, left, curses.A_BOLD | primaryColor)
+      if right: self.addstr(1, graphCol + 5, right, curses.A_BOLD | secondaryColor)
+      
+      # determines max/min value on the graph
+      if self.bounds == BOUNDS_GLOBAL_MAX:
+        primaryMaxBound = param.maxPrimary[self.updateInterval]
+        secondaryMaxBound = param.maxSecondary[self.updateInterval]
+      else:
+        # both BOUNDS_LOCAL_MAX and BOUNDS_TIGHT use local maxima
+        if graphCol < 2:
+          # nothing being displayed
+          primaryMaxBound, secondaryMaxBound = 0, 0
+        else:
+          primaryMaxBound = max(param.primaryCounts[self.updateInterval][1:graphCol + 1])
+          secondaryMaxBound = max(param.secondaryCounts[self.updateInterval][1:graphCol + 1])
+      
+      primaryMinBound = secondaryMinBound = 0
+      if self.bounds == BOUNDS_TIGHT:
+        primaryMinBound = min(param.primaryCounts[self.updateInterval][1:graphCol + 1])
+        secondaryMinBound = min(param.secondaryCounts[self.updateInterval][1:graphCol + 1])
+        
+        # if the max = min (ie, all values are the same) then use zero lower
+        # bound so a graph is still displayed
+        if primaryMinBound == primaryMaxBound: primaryMinBound = 0
+        if secondaryMinBound == secondaryMaxBound: secondaryMinBound = 0
+      
+      # displays bound
+      self.addstr(2, 0, "%4i" % primaryMaxBound, primaryColor)
+      self.addstr(7, 0, "%4i" % primaryMinBound, primaryColor)
+      
+      self.addstr(2, graphCol + 5, "%4i" % secondaryMaxBound, secondaryColor)
+      self.addstr(7, graphCol + 5, "%4i" % secondaryMinBound, secondaryColor)
+      
+      # creates bar graph (both primary and secondary)
+      for col in range(graphCol):
+        colCount = param.primaryCounts[self.updateInterval][col + 1] - primaryMinBound
+        colHeight = min(5, 5 * colCount / (max(1, primaryMaxBound) - primaryMinBound))
+        for row in range(colHeight): self.addstr(7 - row, col + 5, " ", curses.A_STANDOUT | primaryColor)
+        
+        colCount = param.secondaryCounts[self.updateInterval][col + 1] - secondaryMinBound
+        colHeight = min(5, 5 * colCount / (max(1, secondaryMaxBound) - secondaryMinBound))
+        for row in range(colHeight): self.addstr(7 - row, col + graphCol + 10, " ", curses.A_STANDOUT | secondaryColor)
+      
+      # bottom labeling of x-axis
+      intervalSec = 1 # seconds per labeling
+      for i in range(len(UPDATE_INTERVALS)):
+        if i == self.updateInterval: intervalSec = UPDATE_INTERVALS[i][1]
+      
+      intervalSpacing = 10 if graphCol >= WIDE_LABELING_GRAPH_COL else 5
+      unitsLabel, decimalPrecision = None, 0
+      for i in range((graphCol - 4) / intervalSpacing):
+        loc = (i + 1) * intervalSpacing
+        timeLabel = uiTools.getTimeLabel(loc * intervalSec, decimalPrecision)
+        
+        if not unitsLabel: unitsLabel = timeLabel[-1]
+        elif unitsLabel != timeLabel[-1]:
+          # unit scale increased, so also bump the precision of future labels
+          unitsLabel = timeLabel[-1]
+          decimalPrecision += 1
+        else:
+          # same unit as an earlier label, so strip it to save space
+          timeLabel = timeLabel[:-1]
+        
+        self.addstr(8, 4 + loc, timeLabel, primaryColor)
+        self.addstr(8, graphCol + 10 + loc, timeLabel, secondaryColor)
+        
+      param.draw(self, width, height) # allows current stats to modify the display
+  
+  def addStats(self, label, stats):
+    """
+    Makes GraphStats instance available in the panel.
+    """
+    
+    stats._graphPanel = self
+    stats.isPaused = True
+    self.stats[label] = stats
+  
+  def setStats(self, label):
+    """
+    Sets the currently displayed stats instance, hiding panel if None.
+    """
+    
+    if label != self.currentDisplay:
+      if self.currentDisplay: self.stats[self.currentDisplay].setPaused(True)
+      
+      if not label:
+        self.currentDisplay = None
+      elif label in self.stats.keys():
+        self.currentDisplay = label
+        self.stats[label].setPaused(self.isPaused)
+      else: raise ValueError("Unrecognized stats label: %s" % label)
+  
+  def setPaused(self, isPause):
+    """
+    If true, prevents updates from being presented.
+    """
+    
+    if isPause == self.isPaused: return
+    self.isPaused = isPause
+    if self.currentDisplay: self.stats[self.currentDisplay].setPaused(self.isPaused)
+

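As a quick illustration (not part of the patch), the interval rollup that
GraphStats._processEvent performs above can be sketched standalone: each
update interval accumulates the one-second samples, then averages and shifts
once its timescale elapses, trimming history to what fits on screen. The
interval timescales and column cap below are illustrative, not arm's defaults.

  # standalone sketch of the per-interval rollup in GraphStats._processEvent
  # (the interval timescales and MAX_COL below are illustrative values)
  UPDATE_INTERVALS = [("each second", 1), ("5 seconds", 5), ("30 seconds", 30)]
  MAX_COL = 150  # maximum number of graph columns kept per interval
  
  counts = [[0] for _ in UPDATE_INTERVALS]  # index 0 is the in-progress sample
  maxima = [0] * len(UPDATE_INTERVALS)      # per-interval y-axis bounds
  tick = 0
  
  def processSample(value):
    # adds a one-second sample, folding it into each timescale when due
    global tick
    tick += 1
    for i, (_label, timescale) in enumerate(UPDATE_INTERVALS):
      counts[i][0] += value
      if tick % timescale == 0:
        counts[i][0] /= float(timescale)          # average over the interval
        maxima[i] = max(maxima[i], counts[i][0])  # track the upper bound
        counts[i].insert(0, 0)                    # start the next sample
        del counts[i][MAX_COL + 1:]               # trim to displayable history
  
  for sec in range(60): processSample(sec % 10)
  print(maxima)         # per-interval maxima of the averaged samples
  print(counts[1][:5])  # the most recent 5-second averages
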
Deleted: arm/release/interface/graphing/psStats.py
===================================================================
--- arm/trunk/interface/graphing/psStats.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/graphing/psStats.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,129 +0,0 @@
-"""
-Tracks configured ps stats. If non-numeric then this fails, providing a blank
-graph. By default this provides the cpu and memory usage of the tor process.
-"""
-
-import graphPanel
-from util import log, sysTools, torTools, uiTools
-
-# number of subsequent failed queries before giving up
-FAILURE_THRESHOLD = 5
-
-# attempts to use cached results from the header panel's ps calls
-HEADER_PS_PARAM = ["%cpu", "rss", "%mem", "etime"]
-
-DEFAULT_CONFIG = {"features.graph.ps.primaryStat": "%cpu", "features.graph.ps.secondaryStat": "rss", "features.graph.ps.cachedOnly": True, "log.graph.ps.invalidStat": log.WARN, "log.graph.ps.abandon": log.WARN}
-
-class PsStats(graphPanel.GraphStats):
-  """
-  Tracks ps stats, defaulting to system resource usage (cpu and memory usage).
-  """
-  
-  def __init__(self, config=None):
-    graphPanel.GraphStats.__init__(self)
-    self.failedCount = 0      # number of subsequent failed queries
-    
-    self._config = dict(DEFAULT_CONFIG)
-    if config: config.update(self._config)
-    
-    self.queryPid = torTools.getConn().getMyPid()
-    self.queryParam = [self._config["features.graph.ps.primaryStat"], self._config["features.graph.ps.secondaryStat"]]
-    
-    # If we're getting the same stats as the header panel then issues identical
-    # queries to make use of cached results. If not, then disable cache usage.
-    if self.queryParam[0] in HEADER_PS_PARAM and self.queryParam[1] in HEADER_PS_PARAM:
-      self.queryParam = list(HEADER_PS_PARAM)
-    else: self._config["features.graph.ps.cachedOnly"] = False
-    
-    # strips any empty entries
-    while "" in self.queryParam: self.queryParam.remove("")
-    
-    self.cacheTime = 3600 if self._config["features.graph.ps.cachedOnly"] else 1
-  
-  def getTitle(self, width):
-    return "System Resources:"
-  
-  def getHeaderLabel(self, width, isPrimary):
-    avg = (self.primaryTotal if isPrimary else self.secondaryTotal) / max(1, self.tick)
-    lastAmount = self.lastPrimary if isPrimary else self.lastSecondary
-    
-    if isPrimary: statName = self._config["features.graph.ps.primaryStat"]
-    else: statName = self._config["features.graph.ps.secondaryStat"]
-    
-    # provides nice labels for failures and common stats
-    if not statName or self.failedCount >= FAILURE_THRESHOLD or not statName in self.queryParam:
-      return ""
-    elif statName == "%cpu":
-      return "CPU (%s%%, avg: %0.1f%%):" % (lastAmount, avg)
-    elif statName in ("rss", "size"):
-      # memory sizes are converted from MB to B before generating labels
-      statLabel = "Memory" if statName == "rss" else "Size"
-      usageLabel = uiTools.getSizeLabel(lastAmount * 1048576, 1)
-      avgLabel = uiTools.getSizeLabel(avg * 1048576, 1)
-      return "%s (%s, avg: %s):" % (statLabel, usageLabel, avgLabel)
-    else:
-      # generic label (first letter of stat name is capitalized)
-      statLabel = statName[0].upper() + statName[1:]
-      return "%s (%s, avg: %s):" % (statLabel, lastAmount, avg)
-  
-  def getPreferredHeight(self):
-    # hides graph if there's nothing to display (provides default otherwise)
-    # provides default height unless there's nothing to 
-    if self.queryPid and self.queryParam and self.failedCount < FAILURE_THRESHOLD:
-      return graphPanel.DEFAULT_HEIGHT
-    else: return 0
-  
-  def eventTick(self):
-    """
-    Processes a ps event.
-    """
-    
-    psResults = {} # mapping of stat names to their results
-    if self.queryPid and self.queryParam and self.failedCount < FAILURE_THRESHOLD:
-      queryCmd = "ps -p %s -o %s" % (self.queryPid, ",".join(self.queryParam))
-      psCall = sysTools.call(queryCmd, self.cacheTime, True)
-      
-      if psCall and len(psCall) == 2:
-        # ps provided results (first line is headers, second is stats)
-        stats = psCall[1].strip().split()
-        
-        if len(self.queryParam) == len(stats):
-          # we have a result to match each stat - constructs mapping
-          psResults = dict([(self.queryParam[i], stats[i]) for i in range(len(stats))])
-          self.failedCount = 0 # had a successful call - reset failure count
-      
-      if not psResults:
-        # ps call failed, if we fail too many times sequentially then abandon
-        # listing (probably due to invalid ps parameters)
-        self.failedCount += 1
-        
-        if self.failedCount == FAILURE_THRESHOLD:
-          msg = "failed several attempts to query '%s', abandoning ps graph" % queryCmd
-          log.log(self._config["log.graph.ps.abandon"], msg)
-    
-    # if something fails (no pid, ps call failed, etc) then uses last results
-    primary, secondary = self.lastPrimary, self.lastSecondary
-    
-    for isPrimary in (True, False):
-      if isPrimary: statName = self._config["features.graph.ps.primaryStat"]
-      else: statName = self._config["features.graph.ps.secondaryStat"]
-      
-      if statName in psResults:
-        try:
-          result = float(psResults[statName])
-          
-          # The 'rss' and 'size' parameters provide memory usage in KB. This is
-          # scaled up to MB so the graph's y-high is a reasonable value.
-          if statName in ("rss", "size"): result /= 1024.0
-          
-          if isPrimary: primary = result
-          else: secondary = result
-        except ValueError:
-          if self.queryParam != HEADER_PS_PARAM:
-            # custom stat provides non-numeric results - give a warning and stop querying it
-            msg = "unable to use non-numeric ps stat '%s' for graphing" % statName
-            log.log(self._config["log.graph.ps.invalidStat"], msg)
-            self.queryParam.remove(statName)
-    
-    self._processEvent(primary, secondary)
-

Copied: arm/release/interface/graphing/psStats.py (from rev 22616, arm/trunk/interface/graphing/psStats.py)
===================================================================
--- arm/release/interface/graphing/psStats.py	                        (rev 0)
+++ arm/release/interface/graphing/psStats.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -0,0 +1,129 @@
+"""
+Tracks configured ps stats. If the configured stats are non-numeric then this
+fails, providing a blank graph. By default this provides the cpu and memory
+usage of the tor process.
+"""
+
+import graphPanel
+from util import log, sysTools, torTools, uiTools
+
+# number of subsequent failed queries before giving up
+FAILURE_THRESHOLD = 5
+
+# attempts to use cached results from the header panel's ps calls
+HEADER_PS_PARAM = ["%cpu", "rss", "%mem", "etime"]
+
+DEFAULT_CONFIG = {"features.graph.ps.primaryStat": "%cpu",
+                  "features.graph.ps.secondaryStat": "rss",
+                  "features.graph.ps.cachedOnly": True,
+                  "log.graph.ps.invalidStat": log.WARN,
+                  "log.graph.ps.abandon": log.WARN}
+
+class PsStats(graphPanel.GraphStats):
+  """
+  Tracks ps stats, defaulting to system resource usage (cpu and memory usage).
+  """
+  
+  def __init__(self, config=None):
+    graphPanel.GraphStats.__init__(self)
+    self.failedCount = 0      # number of subsequent failed queries
+    
+    self._config = dict(DEFAULT_CONFIG)
+    if config: config.update(self._config)
+    
+    self.queryPid = torTools.getConn().getMyPid()
+    self.queryParam = [self._config["features.graph.ps.primaryStat"], self._config["features.graph.ps.secondaryStat"]]
+    
+    # If we're getting the same stats as the header panel then issues identical
+    # queries to make use of cached results. If not, then disable cache usage.
+    if self.queryParam[0] in HEADER_PS_PARAM and self.queryParam[1] in HEADER_PS_PARAM:
+      self.queryParam = list(HEADER_PS_PARAM)
+    else: self._config["features.graph.ps.cachedOnly"] = False
+    
+    # strips any empty entries
+    while "" in self.queryParam: self.queryParam.remove("")
+    
+    self.cacheTime = 3600 if self._config["features.graph.ps.cachedOnly"] else 1
+  
+  def getTitle(self, width):
+    return "System Resources:"
+  
+  def getHeaderLabel(self, width, isPrimary):
+    avg = (self.primaryTotal if isPrimary else self.secondaryTotal) / max(1, self.tick)
+    lastAmount = self.lastPrimary if isPrimary else self.lastSecondary
+    
+    if isPrimary: statName = self._config["features.graph.ps.primaryStat"]
+    else: statName = self._config["features.graph.ps.secondaryStat"]
+    
+    # provides nice labels for failures and common stats
+    if not statName or self.failedCount >= FAILURE_THRESHOLD or statName not in self.queryParam:
+      return ""
+    elif statName == "%cpu":
+      return "CPU (%s%%, avg: %0.1f%%):" % (lastAmount, avg)
+    elif statName in ("rss", "size"):
+      # memory sizes are converted from MB to B before generating labels
+      statLabel = "Memory" if statName == "rss" else "Size"
+      usageLabel = uiTools.getSizeLabel(lastAmount * 1048576, 1)
+      avgLabel = uiTools.getSizeLabel(avg * 1048576, 1)
+      return "%s (%s, avg: %s):" % (statLabel, usageLabel, avgLabel)
+    else:
+      # generic label (first letter of stat name is capitalized)
+      statLabel = statName[0].upper() + statName[1:]
+      return "%s (%s, avg: %s):" % (statLabel, lastAmount, avg)
+  
+  def getPreferredHeight(self):
+    # provides the default height unless there's nothing to display (in which
+    # case the graph is hidden)
+    if self.queryPid and self.queryParam and self.failedCount < FAILURE_THRESHOLD:
+      return graphPanel.DEFAULT_HEIGHT
+    else: return 0
+  
+  def eventTick(self):
+    """
+    Processes a ps event.
+    """
+    
+    psResults = {} # mapping of stat names to their results
+    if self.queryPid and self.queryParam and self.failedCount < FAILURE_THRESHOLD:
+      queryCmd = "ps -p %s -o %s" % (self.queryPid, ",".join(self.queryParam))
+      psCall = sysTools.call(queryCmd, self.cacheTime, True)
+      
+      if psCall and len(psCall) == 2:
+        # ps provided results (first line is headers, second is stats)
+        stats = psCall[1].strip().split()
+        
+        if len(self.queryParam) == len(stats):
+          # we have a result to match each stat - constructs mapping
+          psResults = dict([(self.queryParam[i], stats[i]) for i in range(len(stats))])
+          self.failedCount = 0 # had a successful call - reset failure count
+      
+      if not psResults:
+        # the ps call failed; if it fails too many times in a row then abandon
+        # the listing (probably due to invalid ps parameters)
+        self.failedCount += 1
+        
+        if self.failedCount == FAILURE_THRESHOLD:
+          msg = "failed several attempts to query '%s', abandoning ps graph" % queryCmd
+          log.log(self._config["log.graph.ps.abandon"], msg)
+    
+    # if something fails (no pid, ps call failed, etc) then uses last results
+    primary, secondary = self.lastPrimary, self.lastSecondary
+    
+    for isPrimary in (True, False):
+      if isPrimary: statName = self._config["features.graph.ps.primaryStat"]
+      else: statName = self._config["features.graph.ps.secondaryStat"]
+      
+      if statName in psResults:
+        try:
+          result = float(psResults[statName])
+          
+          # The 'rss' and 'size' parameters provide memory usage in KB. This is
+          # scaled up to MB so the graph's y-high is a reasonable value.
+          if statName in ("rss", "size"): result /= 1024.0
+          
+          if isPrimary: primary = result
+          else: secondary = result
+        except ValueError:
+          if self.queryParam != HEADER_PS_PARAM:
+            # custom stat provides non-numeric results - give a warning and stop querying it
+            msg = "unable to use non-numeric ps stat '%s' for graphing" % statName
+            log.log(self._config["log.graph.ps.invalidStat"], msg)
+            self.queryParam.remove(statName)
+    
+    self._processEvent(primary, secondary)
+

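As a rough standalone equivalent (not from the patch) of the ps query and
parsing in PsStats.eventTick above, calling ps directly rather than through
the cached sysTools.call wrapper, and using this process's pid as a stand-in
for tor's:

  import os, subprocess
  
  queryParams = ["%cpu", "rss", "%mem", "etime"]
  pid = os.getpid()  # stand-in for the tor pid normally fetched via torTools
  
  output = subprocess.check_output(
      ["ps", "-p", str(pid), "-o", ",".join(queryParams)]).decode()
  lines = output.strip().splitlines()
  
  psResults = {}
  if len(lines) == 2:
    # ps provides a header line followed by the stats for our pid
    stats = lines[1].strip().split()
    if len(stats) == len(queryParams):
      psResults = dict(zip(queryParams, stats))
  
  # rss is reported in KB; scale it to MB like the graph stats do
  if "rss" in psResults:
    psResults["rss"] = float(psResults["rss"]) / 1024.0
  
  print(psResults)
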
Modified: arm/release/interface/headerPanel.py
===================================================================
--- arm/release/interface/headerPanel.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/headerPanel.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -1,272 +1,349 @@
-#!/usr/bin/env python
-# summaryPanel.py -- Static system and Tor related information.
-# Released under the GPL v3 (http://www.gnu.org/licenses/gpl.html)
+"""
+Top panel for every page, containing basic system and tor related information.
+If there's room available then this expands to present its information in two
+columns, otherwise it's laid out as follows:
+  arm - <hostname> (<os> <sys/version>)         Tor <tor/version> (<new, old, recommended, etc>)
+  <nickname> - <address>:<orPort>, [Dir Port: <dirPort>, ]Control Port (<open, password, cookie>): <controlPort>
+  cpu: <cpu%> mem: <mem> (<mem%>) uid: <uid> uptime: <upmin>:<upsec>
+  fingerprint: <fingerprint>
 
+Example:
+  arm - odin (Linux 2.6.24-24-generic)         Tor 0.2.1.19 (recommended)
+  odin - 76.104.132.98:9001, Dir Port: 9030, Control Port (cookie): 9051
+  cpu: 14.6%    mem: 42 MB (4.2%)    pid: 20060   uptime: 48:27
+  fingerprint: BDAD31F6F318E0413833E8EBDA956F76E4D66788
+"""
+
 import os
 import time
-import socket
-from TorCtl import TorCtl
+import threading
 
-from util import panel, uiTools
+from util import panel, sysTools, torTools, uiTools
 
 # minimum width for which panel attempts to double up contents (two columns to
 # better use screen real estate)
-MIN_DUAL_ROW_WIDTH = 140
+MIN_DUAL_COL_WIDTH = 141
 
 FLAG_COLORS = {"Authority": "white",  "BadExit": "red",     "BadDirectory": "red",    "Exit": "cyan",
                "Fast": "yellow",      "Guard": "green",     "HSDir": "magenta",       "Named": "blue",
                "Stable": "blue",      "Running": "yellow",  "Unnamed": "magenta",     "Valid": "green",
                "V2Dir": "cyan",       "V3Dir": "white"}
 
-VERSION_STATUS_COLORS = {"new": "blue",      "new in series": "blue",  "recommended": "green",  "old": "red",
-                         "obsolete": "red",  "unrecommended": "red",   "unknown": "cyan"}
+VERSION_STATUS_COLORS = {"new": "blue", "new in series": "blue", "obsolete": "red", "recommended": "green",  
+                         "old": "red",  "unrecommended": "red",  "unknown": "cyan"}
 
-class HeaderPanel(panel.Panel):
+DEFAULT_CONFIG = {"queries.ps.rate": 5}
+
+class HeaderPanel(panel.Panel, threading.Thread):
   """
-  Draws top area containing static information.
+  Top area containing tor settings and system information. Stats are stored in
+  the vals mapping, with keys including:
+    tor/ version, versionStatus, nickname, orPort, dirPort, controlPort,
+         exitPolicy, isAuthPassword (bool), isAuthCookie (bool)
+         *address, *fingerprint, *flags
+    sys/ hostname, os, version
+    ps/  *%cpu, *rss, *%mem, pid, *etime
   
-  arm - <System Name> (<OS> <Version>)         Tor <Tor Version>
-  <Relay Nickname> - <IP Addr>:<ORPort>, [Dir Port: <DirPort>, ]Control Port (<open, password, cookie>): <ControlPort>
-  cpu: <cpu%> mem: <mem> (<mem%>) uid: <uid> uptime: <upmin>:<upsec>
-  fingerprint: <Fingerprint>
-  
-  Example:
-  arm - odin (Linux 2.6.24-24-generic)         Tor 0.2.1.15-rc
-  odin - 76.104.132.98:9001, Dir Port: 9030, Control Port (cookie): 9051
-  cpu: 14.6%    mem: 42 MB (4.2%)    pid: 20060   uptime: 48:27
-  fingerprint: BDAD31F6F318E0413833E8EBDA956F76E4D66788
+  * volatile parameter that'll be reset on each update
   """
   
-  def __init__(self, stdscr, conn, torPid):
-    panel.Panel.__init__(self, stdscr, 0, 6)
-    self.vals = {"pid": torPid}     # mapping of information to be presented
-    self.conn = conn                # Tor control port connection
-    self.isPaused = False
-    self.isWide = False             # doubles up parameters to shorten section if room's available
-    self.rightParamX = 0            # offset used for doubled up parameters
-    self.lastUpdate = -1            # time last stats was retrived
-    self._updateParams()
-    self.getPreferredSize() # hack to force properly initialize size (when using wide version)
-  
-  def getPreferredSize(self):
-    # width partially determines height (panel has two layouts)
-    panelHeight, panelWidth = panel.Panel.getPreferredSize(self)
-    self.isWide = panelWidth >= MIN_DUAL_ROW_WIDTH
-    self.rightParamX = max(panelWidth / 2, 75) if self.isWide else 0
-    self.setHeight(4 if self.isWide else 6)
-    return panel.Panel.getPreferredSize(self)
-  
-  def draw(self, subwindow, width, height):
-    if not self.isPaused: self._updateParams()
+  def __init__(self, stdscr, config=None):
+    panel.Panel.__init__(self, stdscr, "header", 0)
+    threading.Thread.__init__(self)
+    self.setDaemon(True)
     
-    # TODO: remove after a few revisions if this issue can't be reproduced
-    #   (seemed to be a freak ui problem...)
+    self._isTorConnected = True
+    self._lastUpdate = -1       # time the content was last revised
+    self._isLastDrawWide = False
+    self._isChanged = False     # new stats to be drawn if true
+    self._isPaused = False      # prevents updates if true
+    self._halt = False          # terminates thread if true
+    self._cond = threading.Condition()  # used for pausing the thread
+    self._config = dict(DEFAULT_CONFIG)
     
-    # extra erase/refresh is needed to avoid internal caching screwing up and
-    # refusing to redisplay content in the case of graphical glitches - probably
-    # an obscure curses bug...
-    #self.win.erase()
-    #self.win.refresh()
+    if config:
+      config.update(self._config)
+      self._config["queries.ps.rate"] = max(self._config["queries.ps.rate"], 1)
     
-    #self.clear()
+    self.vals = {}
+    self.valsLock = threading.RLock()
+    self._update(True)
     
-    # Line 1 (system and tor version information)
-    systemNameLabel = "arm - %s " % self.vals["sys-name"]
-    systemVersionLabel = "%s %s" % (self.vals["sys-os"], self.vals["sys-version"])
+    # listens for tor reload (sighup) events
+    torTools.getConn().addStatusListener(self.resetListener)
+  
+  def getHeight(self):
+    """
+    Provides the height of the content, which is dynamically determined by the
+    panel's maximum width.
+    """
     
-    # wraps systemVersionLabel in parentheses and truncates if too long
-    versionLabelMaxWidth = 40 - len(systemNameLabel)
-    if len(systemNameLabel) > 40:
-      # we only have room for the system name label
-      systemNameLabel = systemNameLabel[:39] + "..."
-      systemVersionLabel = ""
-    elif len(systemVersionLabel) > versionLabelMaxWidth:
-      # not enough room to show full version
-      systemVersionLabel = "(%s...)" % systemVersionLabel[:versionLabelMaxWidth - 3].strip()
-    else:
-      # enough room for everything
-      systemVersionLabel = "(%s)" % systemVersionLabel
+    isWide = self.getParent().getmaxyx()[1] >= MIN_DUAL_COL_WIDTH
+    return 4 if isWide else 6
+  
+  def draw(self, subwindow, width, height):
+    self.valsLock.acquire()
+    isWide = width + 1 >= MIN_DUAL_COL_WIDTH
     
-    self.addstr(0, 0, "%s%s" % (systemNameLabel, systemVersionLabel))
+    # space available for content
+    if isWide:
+      leftWidth = max(width / 2, 77)
+      rightWidth = width - leftWidth
+    else: leftWidth = rightWidth = width
     
-    versionStatus = self.vals["status/version/current"]
-    versionColor = VERSION_STATUS_COLORS[versionStatus] if versionStatus in VERSION_STATUS_COLORS else "white"
+    # Line 1 / Line 1 Left (system and tor version information)
+    sysNameLabel = "arm - %s" % self.vals["sys/hostname"]
+    contentSpace = min(leftWidth, 40)
     
-    # truncates torVersionLabel if too long
-    torVersionLabel = self.vals["version"]
-    versionLabelMaxWidth =  (self.rightParamX if self.isWide else width) - 51 - len(versionStatus)
-    if len(torVersionLabel) > versionLabelMaxWidth:
-      torVersionLabel = torVersionLabel[:versionLabelMaxWidth - 1].strip() + "-"
+    if len(sysNameLabel) + 10 <= contentSpace:
+      sysTypeLabel = "%s %s" % (self.vals["sys/os"], self.vals["sys/version"])
+      sysTypeLabel = uiTools.cropStr(sysTypeLabel, contentSpace - len(sysNameLabel) - 3, 4)
+      self.addstr(0, 0, "%s (%s)" % (sysNameLabel, sysTypeLabel))
+    else:
+      self.addstr(0, 0, uiTools.cropStr(sysNameLabel, contentSpace))
     
-    self.addfstr(0, 43, "Tor %s (<%s>%s</%s>)" % (torVersionLabel, versionColor, versionStatus, versionColor))
+    contentSpace = leftWidth - 43
+    if 7 + len(self.vals["tor/version"]) + len(self.vals["tor/versionStatus"]) <= contentSpace:
+      versionColor = VERSION_STATUS_COLORS[self.vals["tor/versionStatus"]] if \
+          self.vals["tor/versionStatus"] in VERSION_STATUS_COLORS else "white"
+      versionStatusMsg = "<%s>%s</%s>" % (versionColor, self.vals["tor/versionStatus"], versionColor)
+      self.addfstr(0, 43, "Tor %s (%s)" % (self.vals["tor/version"], versionStatusMsg))
+    elif 11 <= contentSpace:
+      self.addstr(0, 43, uiTools.cropStr("Tor %s" % self.vals["tor/version"], contentSpace, 4))
     
-    # Line 2 (authentication label red if open, green if credentials required)
-    dirPortLabel = "Dir Port: %s, " % self.vals["DirPort"] if self.vals["DirPort"] != "0" else ""
+    # Line 2 / Line 2 Left (tor ip/port information)
+    entry = ""
+    dirPortLabel = ", Dir Port: %s" % self.vals["tor/dirPort"] if self.vals["tor/dirPort"] != "0" else ""
+    for label in (self.vals["tor/nickname"], " - " + self.vals["tor/address"], ":" + self.vals["tor/orPort"], dirPortLabel):
+      if len(entry) + len(label) <= leftWidth: entry += label
+      else: break
     
-    if self.vals["IsPasswordAuthSet"]: controlPortAuthLabel = "password"
-    elif self.vals["IsCookieAuthSet"]: controlPortAuthLabel = "cookie"
-    else: controlPortAuthLabel = "open"
-    controlPortAuthColor = "red" if controlPortAuthLabel == "open" else "green"
+    if self.vals["tor/isAuthPassword"]: authType = "password"
+    elif self.vals["tor/isAuthCookie"]: authType = "cookie"
+    else: authType = "open"
     
-    labelStart = "%s - %s:%s, %sControl Port (" % (self.vals["Nickname"], self.vals["address"], self.vals["ORPort"], dirPortLabel)
-    self.addfstr(1, 0, "%s<%s>%s</%s>): %s" % (labelStart, controlPortAuthColor, controlPortAuthLabel, controlPortAuthColor, self.vals["ControlPort"]))
+    if len(entry) + 19 + len(self.vals["tor/controlPort"]) + len(authType) <= leftWidth:
+      authColor = "red" if authType == "open" else "green"
+      authLabel = "<%s>%s</%s>" % (authColor, authType, authColor)
+      self.addfstr(1, 0, "%s, Control Port (%s): %s" % (entry, authLabel, self.vals["tor/controlPort"]))
+    elif len(entry) + 16 + len(self.vals["tor/controlPort"]) <= leftWidth:
+      self.addstr(1, 0, "%s, Control Port: %s" % (entry, self.vals["tor/controlPort"]))
+    else: self.addstr(1, 0, entry)
     
-    # Line 3 (system usage info) - line 1 right if wide
-    y, x = (0, self.rightParamX) if self.isWide else (2, 0)
-    self.addstr(y, x, "cpu: %s%%" % self.vals["%cpu"])
-    self.addstr(y, x + 13, "mem: %s (%s%%)" % (uiTools.getSizeLabel(int(self.vals["rss"]) * 1024), self.vals["%mem"]))
-    self.addstr(y, x + 34, "pid: %s" % (self.vals["pid"] if self.vals["etime"] else ""))
-    self.addstr(y, x + 47, "uptime: %s" % self.vals["etime"])
+    # Line 3 / Line 1 Right (system usage info)
+    y, x = (0, leftWidth) if isWide else (2, 0)
+    if self.vals["ps/rss"] != "0": memoryLabel = uiTools.getSizeLabel(int(self.vals["ps/rss"]) * 1024)
+    else: memoryLabel = "0"
     
-    # Line 4 (fingerprint) - line 2 right if wide
-    y, x = (1, self.rightParamX) if self.isWide else (3, 0)
-    self.addstr(y, x, "fingerprint: %s" % self.vals["fingerprint"])
+    sysFields = ((0, "cpu: %s%%" % self.vals["ps/%cpu"]),
+                 (13, "mem: %s (%s%%)" % (memoryLabel, self.vals["ps/%mem"])),
+                 (34, "pid: %s" % (self.vals["ps/pid"] if self._isTorConnected else "")),
+                 (47, "uptime: %s" % self.vals["ps/etime"]))
     
-    # Line 5 (flags) - line 3 left if wide
-    flagLine = "flags: "
-    for flag in self.vals["flags"]:
-      flagColor = FLAG_COLORS[flag] if flag in FLAG_COLORS.keys() else "white"
-      flagLine += "<b><%s>%s</%s></b>, " % (flagColor, flag, flagColor)
+    for (start, label) in sysFields:
+      if start + len(label) <= rightWidth: self.addstr(y, x + start, label)
+      else: break
     
-    if len(self.vals["flags"]) > 0: flagLine = flagLine[:-2]
-    self.addfstr(2 if self.isWide else 4, 0, flagLine)
+    # Line 4 / Line 2 Right (fingerprint)
+    y, x = (1, leftWidth) if isWide else (3, 0)
+    self.addstr(y, x, "fingerprint: %s" % self.vals["tor/fingerprint"])
     
-    # Line 3 right (exit policy) - only present if wide
-    if self.isWide:
-      exitPolicy = self.vals["ExitPolicy"]
+    # Line 5 / Line 3 Left (flags)
+    if self._isTorConnected:
+      flagLine = "flags: "
+      for flag in self.vals["tor/flags"]:
+        flagColor = FLAG_COLORS[flag] if flag in FLAG_COLORS.keys() else "white"
+        flagLine += "<b><%s>%s</%s></b>, " % (flagColor, flag, flagColor)
       
+      if len(self.vals["tor/flags"]) > 0: flagLine = flagLine[:-2]
+      else: flagLine += "<b><cyan>none</cyan></b>"
+      
+      self.addfstr(2 if isWide else 4, 0, flagLine)
+    else:
+      statusTime = torTools.getConn().getStatus()[1]
+      statusTimeLabel = time.strftime("%H:%M %m/%d/%Y", time.localtime(statusTime))
+      self.addfstr(2 if isWide else 4, 0, "<b><red>Tor Disconnected</red></b> (%s)" % statusTimeLabel)
+    
+    # Undisplayed / Line 3 Right (exit policy)
+    if isWide:
+      exitPolicy = self.vals["tor/exitPolicy"]
+      
       # adds note when default exit policy is appended
-      # TODO: the following catch-all policies arne't quite exhaustive
       if exitPolicy == None: exitPolicy = "<default>"
-      elif not (exitPolicy.endswith("accept *:*") or exitPolicy.endswith("accept *")) and not (exitPolicy.endswith("reject *:*") or exitPolicy.endswith("reject *")):
-        exitPolicy += ", <default>"
+      elif not exitPolicy.endswith((" *:*", " *")): exitPolicy += ", <default>"
       
+      # color codes accepts to be green, rejects to be red, and default marker to be cyan
+      isSimple = len(exitPolicy) > rightWidth - 13
       policies = exitPolicy.split(", ")
-      
-      # color codes accepts to be green, rejects to be red, and default marker to be cyan
-      # TODO: instead base this on if there's space available for the full verbose version
-      isSimple = len(policies) <= 2 # if policy is short then it's kept verbose, otherwise 'accept' and 'reject' keywords removed
       for i in range(len(policies)):
         policy = policies[i].strip()
-        displayedPolicy = policy if isSimple else policy.replace("accept", "").replace("reject", "").strip()
+        displayedPolicy = policy.replace("accept", "").replace("reject", "").strip() if isSimple else policy
         if policy.startswith("accept"): policy = "<green><b>%s</b></green>" % displayedPolicy
         elif policy.startswith("reject"): policy = "<red><b>%s</b></red>" % displayedPolicy
         elif policy.startswith("<default>"): policy = "<cyan><b>%s</b></cyan>" % displayedPolicy
         policies[i] = policy
-      exitPolicy = ", ".join(policies)
       
-      self.addfstr(2, self.rightParamX, "exit policy: %s" % exitPolicy)
+      self.addfstr(2, leftWidth, "exit policy: %s" % ", ".join(policies))
+    
+    self._isLastDrawWide = isWide
+    self._isChanged = False
+    self.valsLock.release()
   
+  def redraw(self, forceRedraw=False, block=False):
+    # determines if the content needs to be redrawn or not
+    isWide = self.getParent().getmaxyx()[1] >= MIN_DUAL_COL_WIDTH
+    panel.Panel.redraw(self, forceRedraw or self._isChanged or isWide != self._isLastDrawWide, block)
+  
   def setPaused(self, isPause):
     """
     If true, prevents updates from being presented.
     """
     
-    self.isPaused = isPause
+    self._isPaused = isPause
   
-  def _updateParams(self, forceReload = False):
+  def run(self):
     """
-    Updates mapping of static Tor settings and system information to their
-    corresponding string values. Keys include:
-    info - version, *address, *fingerprint, *flags, status/version/current
-    sys - sys-name, sys-os, sys-version
-    ps - *%cpu, *rss, *%mem, *pid, *etime
-    config - Nickname, ORPort, DirPort, ControlPort, ExitPolicy
-    config booleans - IsPasswordAuthSet, IsCookieAuthSet, IsAccountingEnabled
+    Keeps stats updated, querying new information at a set rate.
+    """
     
-    * volatile parameter that'll be reset (otherwise won't be checked if
-    already set)
+    while not self._halt:
+      timeSinceReset = time.time() - self._lastUpdate
+      psRate = self._config["queries.ps.rate"]
+      
+      if self._isPaused or timeSinceReset < psRate or not self._isTorConnected:
+        sleepTime = max(0.5, psRate - timeSinceReset)
+        self._cond.acquire()
+        if not self._halt: self._cond.wait(sleepTime)
+        self._cond.release()
+      else:
+        self._update()
+        self.redraw()
+  
+  def stop(self):
     """
+    Halts further resolutions and terminates the thread.
+    """
     
-    infoFields = ["address", "fingerprint"] # keys for which get_info will be called
-    if len(self.vals) <= 1 or forceReload:
-      lookupFailed = False
+    self._cond.acquire()
+    self._halt = True
+    self._cond.notifyAll()
+    self._cond.release()
+  
+  def resetListener(self, conn, eventType):
+    """
+    Updates static parameters on tor reload (sighup) events.
+    
+    Arguments:
+      conn      - tor controller
+      eventType - type of event detected
+    """
+    
+    if eventType == torTools.TOR_INIT:
+      self._isTorConnected = True
+      self._update(True)
+      self.redraw()
+    elif eventType == torTools.TOR_CLOSED:
+      self._isTorConnected = False
+      self._update()
+      self.redraw(True)
+  
+  def _update(self, setStatic=False):
+    """
+    Updates stats in the vals mapping. By default this just revises volatile
+    attributes.
+    
+    Arguments:
+      setStatic - resets all parameters, including relatively static values
+    """
+    
+    self.valsLock.acquire()
+    conn = torTools.getConn()
+    
+    if setStatic:
+      # version is truncated to first part, for instance:
+      # 0.2.2.13-alpha (git-feb8c1b5f67f2c6f) -> 0.2.2.13-alpha
+      self.vals["tor/version"] = conn.getInfo("version", "Unknown").split()[0]
+      self.vals["tor/versionStatus"] = conn.getInfo("status/version/current", "Unknown")
+      self.vals["tor/nickname"] = conn.getOption("Nickname", "")
+      self.vals["tor/orPort"] = conn.getOption("ORPort", "")
+      self.vals["tor/dirPort"] = conn.getOption("DirPort", "0")
+      self.vals["tor/controlPort"] = conn.getOption("ControlPort", "")
+      self.vals["tor/isAuthPassword"] = conn.getOption("HashedControlPassword") != None
+      self.vals["tor/isAuthCookie"] = conn.getOption("CookieAuthentication") == "1"
       
-      # first call (only contasns 'pid' mapping) - retrieve static params
-      infoFields += ["version", "status/version/current"]
+      # overwrite address if ORListenAddress is set (and possibly orPort too)
+      self.vals["tor/address"] = "Unknown"
+      listenAddr = conn.getOption("ORListenAddress")
+      if listenAddr:
+        if ":" in listenAddr:
+          # both ip and port overwritten
+          self.vals["tor/address"] = listenAddr[:listenAddr.find(":")]
+          self.vals["tor/orPort"] = listenAddr[listenAddr.find(":") + 1:]
+        else:
+          self.vals["tor/address"] = listenAddr
       
-      # populates with some basic system information
+      # fetch exit policy (might span over multiple lines)
+      policyEntries = []
+      for exitPolicy in conn.getOption("ExitPolicy", [], True):
+        policyEntries += [policy.strip() for policy in exitPolicy.split(",")]
+      self.vals["tor/exitPolicy"] = ", ".join(policyEntries)
+      
+      # system information
       unameVals = os.uname()
-      self.vals["sys-name"] = unameVals[1]
-      self.vals["sys-os"] = unameVals[0]
-      self.vals["sys-version"] = unameVals[2]
+      self.vals["sys/hostname"] = unameVals[1]
+      self.vals["sys/os"] = unameVals[0]
+      self.vals["sys/version"] = unameVals[2]
       
-      try:
-        # parameters from the user's torrc
-        configFields = ["Nickname", "ORPort", "DirPort", "ControlPort"]
-        self.vals.update(dict([(key, self.conn.get_option(key)[0][1]) for key in configFields]))
-        
-        # fetch exit policy (might span over multiple lines)
-        exitPolicyEntries = []
-        for (key, value) in self.conn.get_option("ExitPolicy"):
-          if value: exitPolicyEntries.append(value)
-        
-        self.vals["ExitPolicy"] = ", ".join(exitPolicyEntries)
-        
-        # simply keeps booleans for if authentication info is set
-        self.vals["IsPasswordAuthSet"] = not self.conn.get_option("HashedControlPassword")[0][1] == None
-        self.vals["IsCookieAuthSet"] = self.conn.get_option("CookieAuthentication")[0][1] == "1"
-        self.vals["IsAccountingEnabled"] = self.conn.get_info('accounting/enabled')['accounting/enabled'] == "1"
-      except (socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed): lookupFailed = True
+      pid = conn.getMyPid()
+      self.vals["ps/pid"] = pid if pid else ""
       
-      if lookupFailed:
-        # tor connection closed or gave error - keep old values if available, otherwise set to empty string / false
-        for field in configFields:
-          if field not in self.vals: self.vals[field] = ""
-        
-        for field in ["IsPasswordAuthSet", "IsCookieAuthSet", "IsAccountingEnabled"]:
-          if field not in self.vals: self.vals[field] = False
-      
-    # gets parameters that throw errors if unavailable
-    for param in infoFields:
-      try: self.vals.update(self.conn.get_info(param))
-      except TorCtl.ErrorReply: self.vals[param] = "Unknown"
-      except (TorCtl.TorCtlClosed, socket.error):
-        # Tor shut down or crashed - keep last known values
-        if not self.vals[param]: self.vals[param] = "Unknown"
+      # reverts volatile parameters to defaults
+      self.vals["tor/fingerprint"] = "Unknown"
+      self.vals["tor/flags"] = []
+      self.vals["ps/%cpu"] = "0"
+      self.vals["ps/rss"] = "0"
+      self.vals["ps/%mem"] = "0"
+      self.vals["ps/etime"] = ""
     
-    # if ORListenAddress is set overwrites 'address' (and possibly ORPort)
-    try:
-      listenAddr = self.conn.get_option("ORListenAddress")[0][1]
-      if listenAddr:
-        if ":" in listenAddr:
-          # both ip and port overwritten
-          self.vals["address"] = listenAddr[:listenAddr.find(":")]
-          self.vals["ORPort"] = listenAddr[listenAddr.find(":") + 1:]
-        else:
-          self.vals["address"] = listenAddr
-    except (socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed): pass
+    # sets volatile parameters
+    volatile = {}
     
-    # flags held by relay
-    self.vals["flags"] = []
-    if self.vals["fingerprint"] != "Unknown":
-      try:
-        nsCall = self.conn.get_network_status("id/%s" % self.vals["fingerprint"])
-        if nsCall: self.vals["flags"] = nsCall[0].flags
-        else: raise TorCtl.ErrorReply # network consensus couldn't be fetched
-      except (socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed): pass
+    # TODO: This can change, being reported by STATUS_SERVER -> EXTERNAL_ADDRESS
+    # events. Introduce caching via torTools?
+    if self.vals["tor/address"] == "Unknown":
+      volatile["tor/address"] = conn.getInfo("address", self.vals["tor/address"])
     
+    volatile["tor/fingerprint"] = conn.getMyFingerprint(self.vals["tor/fingerprint"])
+    volatile["tor/flags"] = conn.getMyFlags(self.vals["tor/flags"])
+    
+    # ps derived stats
     psParams = ["%cpu", "rss", "%mem", "etime"]
-    if self.vals["pid"]:
-      # ps call provides header followed by params for tor
-      psCall = os.popen('ps -p %s -o %s  2> /dev/null' % (self.vals["pid"], ",".join(psParams)))
+    if self.vals["ps/pid"]:
+      # if the call fails then everything except etime is zeroed out (most
+      # likely tor's no longer running)
+      volatile["ps/%cpu"] = "0"
+      volatile["ps/rss"] = "0"
+      volatile["ps/%mem"] = "0"
       
-      try: sampling = psCall.read().strip().split()[len(psParams):]
-      except IOError: sampling = [] # ps call failed
-      psCall.close()
-    else:
-      sampling = [] # no pid known - blank fields
-    
-    if len(sampling) < 4:
-      # either ps failed or returned no tor instance, blank information except runtime
-      if "etime" in self.vals: sampling = [""] * (len(psParams) - 1) + [self.vals["etime"]]
-      else: sampling = [""] * len(psParams)
+      # the ps call formats results as:
+      # %CPU   RSS %MEM     ELAPSED
+      # 0.3 14096  1.3       29:51
+      psRate = self._config["queries.ps.rate"]
+      psCall = sysTools.call("ps -p %s -o %s" % (self.vals["ps/pid"], ",".join(psParams)), psRate, True)
       
-      # %cpu, rss, and %mem are better zeroed out
-      for i in range(3): sampling[i] = "0"
+      if psCall and len(psCall) >= 2:
+        stats = psCall[1].strip().split()
+        
+        if len(stats) == len(psParams):
+          for i in range(len(psParams)):
+            volatile["ps/" + psParams[i]] = stats[i]
     
-    for i in range(len(psParams)):
-      self.vals[psParams[i]] = sampling[i]
+    # checks if any changes have been made and merges volatile into vals
+    self._isChanged |= setStatic
+    for key, val in volatile.items():
+      self._isChanged |= self.vals[key] != val
+      self.vals[key] = val
     
-    self.lastUpdate = time.time()
+    self._lastUpdate = time.time()
+    self.valsLock.release()
 

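The run() loop above is worth calling out as a pattern: a threading.Condition
both paces the ps queries and lets stop() wake the thread immediately rather
than waiting out the sleep. A minimal standalone sketch of just that pattern
(the class and rate below are illustrative, not arm code):

  import threading, time
  
  class Updater(threading.Thread):
    def __init__(self, rate = 5):
      threading.Thread.__init__(self)
      self.setDaemon(True)
      self.rate = rate            # seconds between updates
      self.lastUpdate = -1
      self.isPaused = False
      self._halt = False          # terminates the thread if true
      self._cond = threading.Condition()
    
    def run(self):
      while not self._halt:
        elapsed = time.time() - self.lastUpdate
        if self.isPaused or elapsed < self.rate:
          self._cond.acquire()
          if not self._halt: self._cond.wait(max(0.5, self.rate - elapsed))
          self._cond.release()
        else:
          print("updating stats")  # stand-in for _update() and redraw()
          self.lastUpdate = time.time()
    
    def stop(self):
      # wakes the thread so it notices _halt right away
      self._cond.acquire()
      self._halt = True
      self._cond.notifyAll()
      self._cond.release()
  
  updater = Updater(rate = 1)
  updater.start()
  time.sleep(2.5)
  updater.stop()
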
Modified: arm/release/interface/logPanel.py
===================================================================
--- arm/release/interface/logPanel.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/interface/logPanel.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -2,13 +2,12 @@
 # logPanel.py -- Resources related to Tor event monitoring.
 # Released under the GPL v3 (http://www.gnu.org/licenses/gpl.html)
 
-import os
 import time
 import curses
 from curses.ascii import isprint
 from TorCtl import TorCtl
 
-from util import log, panel, uiTools
+from util import log, panel, sysTools, uiTools
 
 PRE_POPULATE_LOG = True               # attempts to retrieve events from log file if available
 
@@ -32,8 +31,8 @@
         w WARN      f DESCCHANGED     s STREAM          z STATUS_SERVER
         e ERR       g GUARD           t STREAM_BW       A All Events
                     k NEWCONSENSUS    u CLIENTS_SEEN    X No Events
-          DINWE Runlevel and higher severity            C TorCtl Events
-          12345 ARM runlevel and higher severity        U Unknown Events"""
+          DINWE runlevel and higher severity            C TorCtl Events
+          12345 arm runlevel and higher severity        U Unknown Events"""
 
 TOR_CTL_CLOSE_MSG = "Tor closed control connection. Exiting event thread."
 
@@ -91,7 +90,7 @@
   
   def __init__(self, stdscr, conn, loggedEvents):
     TorCtl.PostEventListener.__init__(self)
-    panel.Panel.__init__(self, stdscr, 0)
+    panel.Panel.__init__(self, stdscr, "log", 0)
     self.scroll = 0
     self.msgLog = []                      # tuples of (logText, color)
     self.isPaused = False
@@ -111,7 +110,6 @@
     # attempts to process events from log file
     if PRE_POPULATE_LOG:
       previousPauseState = self.isPaused
-      tailCall = None
       
       try:
         logFileLoc = None
@@ -129,11 +127,11 @@
           
           # trims log to last entries to deal with logs when they're in the GB or TB range
           # throws IOError if tail fails (falls to the catch-all later)
+          # TODO: now that this is using sysTools figure out if we can do away with the catch-all...
           limit = PRE_POPULATE_MIN_LIMIT if ("DEBUG" in self.loggedEvents or "INFO" in self.loggedEvents) else PRE_POPULATE_MAX_LIMIT
-          tailCall = os.popen("tail -n %i %s 2> /dev/null" % (limit, logFileLoc))
           
           # truncates to entries for this tor instance
-          lines = tailCall.readlines()
+          lines = sysTools.call("tail -n %i %s" % (limit, logFileLoc))
           instanceStart = 0
           for i in range(len(lines) - 1, -1, -1):
             if "opening log file" in lines[i]:
@@ -152,7 +150,6 @@
       finally:
         self.setPaused(previousPauseState)
         self.eventTimeOverwrite = None
-        if tailCall: tailCall.close()
   
   def handleKey(self, key):
     # scroll movement
@@ -218,11 +215,12 @@
   
   def ns_event(self, event):
     # NetworkStatus params: nickname, idhash, orhash, ip, orport (int), dirport (int), flags, idhex, bandwidth, updated (datetime)
-    msg = ""
-    for ns in event.nslist:
-      msg += ", %s (%s:%i)" % (ns.nickname, ns.ip, ns.orport)
-    if len(msg) > 1: msg = msg[2:]
-    self.registerEvent("NS", "Listed (%i): %s" % (len(event.nslist), msg), "blue")
+    if "NS" in self.loggedEvents:
+      msg = ""
+      for ns in event.nslist:
+        msg += ", %s (%s:%i)" % (ns.nickname, ns.ip, ns.orport)
+      if len(msg) > 1: msg = msg[2:]
+      self.registerEvent("NS", "Listed (%i): %s" % (len(event.nslist), msg), "blue")
   
   def new_consensus_event(self, event):
     if "NEWCONSENSUS" in self.loggedEvents:
@@ -296,7 +294,7 @@
     else:
       for msgLine in toAdd: self.msgLog.insert(0, (msgLine, color))
       if len(self.msgLog) > MAX_LOG_ENTRIES: del self.msgLog[MAX_LOG_ENTRIES:]
-      self.redraw()
+      self.redraw(True)
   
   def draw(self, subwindow, width, height):
     """
@@ -388,7 +386,7 @@
     if self.isPaused: self.pauseBuffer = []
     else:
       self.msgLog = (self.pauseBuffer + self.msgLog)[:MAX_LOG_ENTRIES]
-      if self.win: self.redraw() # hack to avoid redrawing during init
+      if self.win: self.redraw(True) # hack to avoid redrawing during init
   
   def getHeartbeat(self):
     """

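For reference, a standalone sketch (not from the patch) of the prepopulation
approach above: tail the last chunk of tor's log and keep only entries from
the current tor instance, i.e. everything after the most recent "opening log
file" marker. The log path in the usage comment is hypothetical.

  import subprocess
  
  def prepopulate(logPath, limit = 1000):
    try:
      tailOutput = subprocess.check_output(["tail", "-n", str(limit), logPath])
    except (OSError, subprocess.CalledProcessError):
      return []  # tail is unavailable or the log couldn't be read
    
    lines = tailOutput.decode().splitlines()
    
    # truncates to entries belonging to this tor instance
    instanceStart = 0
    for i in range(len(lines) - 1, -1, -1):
      if "opening log file" in lines[i]:
        instanceStart = i
        break
    
    return lines[instanceStart:]
  
  # example: entries = prepopulate("/var/log/tor/notices.log")
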
Modified: arm/release/util/__init__.py
===================================================================
--- arm/release/util/__init__.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/util/__init__.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -4,5 +4,5 @@
 and safely working with curses (hiding some of the gory details).
 """
 
-__all__ = ["connections", "hostnames", "log", "panel", "uiTools"]
+__all__ = ["conf", "connections", "hostnames", "log", "panel", "sysTools", "torTools", "uiTools"]
 

Copied: arm/release/util/conf.py (from rev 22616, arm/trunk/util/conf.py)
===================================================================
--- arm/release/util/conf.py	                        (rev 0)
+++ arm/release/util/conf.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -0,0 +1,246 @@
+"""
+This provides handlers for specially formatted configuration files. Entries are
+expected to consist of simple key/value pairs, and anything after "#" is
+stripped as a comment. Excess whitespace is trimmed and empty lines are
+ignored. For instance:
+# This is my sample config
+
+user.name Galen
+user.password yabba1234 # here's an inline comment
+user.notes takes a fancy to pepperjack cheese # here's an inline comment... wait no
+blankEntry.example
+
+would be loaded as four entries (the last one's value being an empty string).
+If a key's defined multiple times then the last instance of it is used.
+"""
+
+import os
+import threading
+
+import log
+
+CONFS = {}  # mapping of identifier to singleton instances of configs
+CONFIG = {"log.configEntryNotFound": None, "log.configEntryTypeError": log.INFO}
+
+def loadConfig(config):
+  config.update(CONFIG)
+
+def getConfig(handle):
+  """
+  Singleton constructor for configuration file instances. If a configuration
+  already exists for the handle then it's returned. Otherwise a fresh instance
+  is constructed.
+  
+  Arguments:
+    handle - unique identifier used to access this config instance
+  """
+  
+  if not handle in CONFS: CONFS[handle] = Config()
+  return CONFS[handle]
+
+class Config():
+  """
+  Handler for easily working with custom configurations, providing persistence
+  to and from files. All operations are thread safe.
+  
+  Parameters:
+    path        - location from which configurations are saved and loaded
+    contents    - mapping of current key/value pairs
+    rawContents - last read/written config (initialized to an empty list)
+  """
+  
+  def __init__(self):
+    """
+    Creates a new configuration instance.
+    """
+    
+    self.path = None        # path to the associated configuration file
+    self.contents = {}      # configuration key/value pairs
+    self.contentsLock = threading.RLock()
+    self.requestedKeys = set()
+    self.rawContents = []   # raw contents read from configuration file
+  
+  def getStr(self, key, default=None):
+    """
+    This provides the current value associated with a given key. If no such
+    key exists then this provides the default.
+    
+    Arguments:
+      key     - config setting to be fetched
+      default - value provided if no such key exists
+    """
+    
+    self.contentsLock.acquire()
+    
+    if key in self.contents:
+      val = self.contents[key]
+      self.requestedKeys.add(key)
+    else:
+      msg = "config entry '%s' not found, defaulting to '%s'" % (key, str(default))
+      log.log(CONFIG["log.configEntryNotFound"], msg)
+      val = default
+    
+    self.contentsLock.release()
+    
+    return val
+  
+  def get(self, key, default=None, minValue=0, maxValue=None):
+    """
+    Fetches the given configuration, using the key and default value to hint
+    the type it should be. Recognized types are:
+    - boolean if default is a boolean (valid values are 'true' and 'false',
+      anything else provides the default)
+    - integer or float if default is a number (provides default if fails to
+      cast)
+    - logging runlevel if key starts with "log."
+    
+    Arguments:
+      key      - config setting to be fetched
+      default  - value provided if no such key exists
+      minValue - if set and default value is numeric then uses this constraint
+      maxValue - if set and default value is numeric then uses this constraint
+    """
+    
+    callDefault = log.runlevelToStr(default) if key.startswith("log.") else default
+    val = self.getStr(key, callDefault)
+    if val == default: return val
+    
+    if key.startswith("log."):
+      if val.lower() in ("none", "debug", "info", "notice", "warn", "err"):
+        val = log.strToRunlevel(val)
+      else:
+        msg = "config entry '%s' is expected to be a runlevel, defaulting to '%s'" % (key, callDefault)
+        log.log(CONFIG["log.configEntryTypeError"], msg)
+        val = default
+    elif isinstance(default, bool):
+      if val.lower() == "true": val = True
+      elif val.lower() == "false": val = False
+      else:
+        msg = "config entry '%s' is expected to be a boolean, defaulting to '%s'" % (key, str(default))
+        log.log(CONFIG["log.configEntryTypeError"], msg)
+        val = default
+    elif isinstance(default, int):
+      try:
+        val = int(val)
+        if minValue: val = max(val, minValue)
+        if maxValue: val = min(val, maxValue)
+      except ValueError:
+        msg = "config entry '%s' is expected to be an integer, defaulting to '%i'" % (key, default)
+        log.log(CONFIG["log.configEntryTypeError"], msg)
+        val = default
+    elif isinstance(default, float):
+      try:
+        val = float(val)
+        if minValue: val = max(val, minValue)
+        if maxValue: val = min(val, maxValue)
+      except ValueError:
+        msg = "config entry '%s' is expected to be a float, defaulting to '%f'" % (key, default)
+        log.log(CONFIG["log.configEntryTypeError"], msg)
+        val = default
+    
+    return val
+  
+  def update(self, confMappings):
+    """
+    Revises a set of key/value mappings to reflect the current configuration.
+    Undefined values are left with their current values.
+    
+    Arguments:
+      confMappings - configuration key/value mappings to be revised
+    """
+    
+    for entry in confMappings.keys():
+      confMappings[entry] = self.get(entry, confMappings[entry])
+  
+  def getKeys(self):
+    """
+    Provides all keys in the currently loaded configuration.
+    """
+    
+    return self.contents.keys()
+  
+  def getUnusedKeys(self):
+    """
+    Provides the set of keys that have never been requested.
+    """
+    
+    return set(self.getKeys()).difference(self.requestedKeys)
+  
+  def set(self, key, value):
+    """
+    Stores the given configuration value.
+    
+    Arguments:
+      key   - config key to be set
+      value - config value to be set
+    """
+    
+    self.contentsLock.acquire()
+    self.contents[key] = value
+    self.contentsLock.release()
+  
+  def clear(self):
+    """
+    Drops all current key/value mappings.
+    """
+    
+    self.contentsLock.acquire()
+    self.contents.clear()
+    self.contentsLock.release()
+  
+  def load(self):
+    """
+    Reads in the contents of the currently set configuration file (appending
+    any results to the current configuration). If the file's empty or doesn't
+    exist then this doesn't do anything.
+    
+    Other issues (like having an unset path or insufficient permissions) result
+    in an IOError.
+    """
+    
+    if not self.path: raise IOError("unable to load (config path undefined)")
+    
+    if os.path.exists(self.path):
+      configFile = open(self.path, "r")
+      self.rawContents = configFile.readlines()
+      configFile.close()
+      
+      self.contentsLock.acquire()
+      
+      for line in self.rawContents:
+        # strips any commenting or excess whitespace
+        commentStart = line.find("#")
+        if commentStart != -1: line = line[:commentStart]
+        line = line.strip()
+        
+        # parse the key/value pair
+        if line:
+          if " " in line:
+            key, value = line.split(" ", 1)
+            self.contents[key] = value
+          else:
+            self.contents[line] = "" # no value was provided
+      
+      self.contentsLock.release()
+  
+  def save(self, saveBackup=True):
+    """
+    Writes the contents of the current configuration. If a configuration file
+    already exists then merges as follows:
+    - comments and file contents not in this config are left unchanged
+    - lines with duplicate keys are stripped (first instance is kept)
+    - existing entries are overwritten with their new values, preserving the
+      positioning of in-line comments if able
+    - config entries not in the file are appended to the end in alphabetical
+      order
+    
+    Problems in writing (such as an unset path or insufficient permissions)
+    result in an IOError.
+    
+    Arguments:
+      saveBackup - if true and a file already exists then it's saved (with
+                   '.backup' appended to its filename)
+    """
+    
+    pass # TODO: implement when persistence is needed
+

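A hypothetical usage sketch of the conf module above (the armrc path is just
an example; the keys mirror ones used elsewhere in this commit): fetch the
singleton for a handle, load the file if it exists, then pull typed values
with get() and getStr().

  import os
  from util import conf
  
  config = conf.getConfig("arm")                # singleton keyed by handle
  config.path = os.path.expanduser("~/.armrc")  # example armrc location
  
  try:
    config.load()  # quietly does nothing if the file doesn't exist
  except IOError as exc:
    print("failed to load the armrc: %s" % exc)
  
  # defaults hint at the expected type; unparsable values fall back to them
  updateRate = config.get("features.graph.interval", 1)
  cachedOnly = config.get("features.graph.ps.cachedOnly", True)
  primaryStat = config.getStr("features.graph.ps.primaryStat", "%cpu")
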
Modified: arm/release/util/connections.py
===================================================================
--- arm/release/util/connections.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/util/connections.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -7,21 +7,26 @@
 - lsof      lsof -nPi | grep "<process>\s*<pid>.*(ESTABLISHED)"
 
 all queries dump its stderr (directing it to /dev/null). Unfortunately FreeBSD
-lacks support for the needed netstat flags, and has a completely different
+lacks support for the needed netstat flags and has a completely different
 program for 'ss', so this is quite likely to fail there.
 """
 
-import os
 import sys
 import time
 import threading
 
 import log
+import sysTools
 
 # enums for connection resolution utilities
 CMD_NETSTAT, CMD_SS, CMD_LSOF = range(1, 4)
 CMD_STR = {CMD_NETSTAT: "netstat", CMD_SS: "ss", CMD_LSOF: "lsof"}
 
+# If true this provides new instantiations for resolvers if the old one has
+# been stopped. This can make it difficult ensure all threads are terminated
+# when accessed concurrently.
+RECREATE_HALTED_RESOLVERS = False
+
 # formatted strings for the commands to be executed with the various resolvers
 # options are:
 # n = prevents dns lookups, p = include process, t = tcp only
@@ -29,26 +34,28 @@
 # tcp  0  0  127.0.0.1:9051  127.0.0.1:53308  ESTABLISHED 9912/tor
 # *note: bsd uses a different variant ('-t' => '-p tcp', but worse an
 #   equivilant -p doesn't exist so this can't function)
-RUN_NETSTAT = "netstat -npt 2> /dev/null | grep %s/%s 2> /dev/null"
+RUN_NETSTAT = "netstat -npt | grep %s/%s"
 
-# p = include process
+# n = numeric ports, p = include process
 # output:
 # ESTAB  0  0  127.0.0.1:9051  127.0.0.1:53308  users:(("tor",9912,20))
 # *note: under freebsd this command belongs to a spreadsheet program
-RUN_SS = "ss -p 2> /dev/null | grep \"\\\"%s\\\",%s\" 2> /dev/null"
+RUN_SS = "ss -np | grep \"\\\"%s\\\",%s\""
 
 # n = prevent dns lookups, P = show port numbers (not names), i = ip only
 # output:
 # tor  9912  atagar  20u  IPv4  33453  TCP 127.0.0.1:9051->127.0.0.1:53308
-RUN_LSOF = "lsof -nPi 2> /dev/null | grep \"%s\s*%s.*(ESTABLISHED)\" 2> /dev/null"
+RUN_LSOF = "lsof -nPi | grep \"%s\s*%s.*(ESTABLISHED)\""
 
 RESOLVERS = []                      # connection resolvers available via the singleton constructor
-RESOLVER_MIN_DEFAULT_LOOKUP = 5     # minimum seconds between lookups (unless overwritten)
-RESOLVER_SLEEP_INTERVAL = 1         # period to sleep when not resolving
 RESOLVER_FAILURE_TOLERANCE = 3      # number of subsequent failures before moving on to another resolver
 RESOLVER_SERIAL_FAILURE_MSG = "Querying connections with %s failed, trying %s"
 RESOLVER_FINAL_FAILURE_MSG = "All connection resolvers failed"
+CONFIG = {"queries.connections.minRate": 5, "log.connLookupFailed": log.INFO, "log.connLookupFailover": log.NOTICE, "log.connLookupAbandon": log.WARN, "log.connLookupRateGrowing": None}
 
+def loadConfig(config):
+  config.update(CONFIG)
+
 def getConnections(resolutionCmd, processName, processPid = ""):
   """
   Retrieves a list of the current connections for a given process, providing a
@@ -70,11 +77,10 @@
   elif resolutionCmd == CMD_SS: cmd = RUN_SS % (processName, processPid)
   else: cmd = RUN_LSOF % (processName, processPid)
   
-  resolutionCall = os.popen(cmd)
-  results = resolutionCall.readlines()
-  resolutionCall.close()
+  # raises an IOError if the command fails or isn't available
+  results = sysTools.call(cmd)
   
-  if not results: raise IOError("Unable to resolve connections using: %s" % cmd)
+  if not results: raise IOError("No results found using: %s" % cmd)
   
   # parses results for the resolution command
   conn = []
@@ -93,50 +99,54 @@
   
   return conn
 
-def getResolver(processName, processPid = "", newInit = True):
+def isResolverAlive(processName, processPid = ""):
   """
-  Singleton constructor for resolver instances. If a resolver already exists
-  for the process then it's returned. Otherwise one is created and started.
+  This provides true if a singleton resolver instance exists for the given
+  process/pid combination, false otherwise.
   
   Arguments:
-    processName - name of the process being resolved
-    processPid  - pid of the process being resolved, if undefined this matches
+    processName - name of the process being checked
+    processPid  - pid of the process being checked, if undefined this matches
                   against any resolver with the process name
-    newInit     - if a resolver isn't available then one's created if true,
-                  otherwise this returns None
   """
   
-  # check if one's already been created
   for resolver in RESOLVERS:
-    if resolver.processName == processName and (not processPid or resolver.processPid == processPid):
-      return resolver
+    if not resolver._halt and resolver.processName == processName and (not processPid or resolver.processPid == processPid):
+      return True
   
-  # make a new resolver
-  if newInit:
-    r = ConnectionResolver(processName, processPid)
-    r.start()
-    RESOLVERS.append(r)
-    return r
-  else: return None
+  return False
 
-def _isAvailable(command):
+def getResolver(processName, processPid = ""):
   """
-  Checks the current PATH to see if a command is available or not. This returns
-  True if an accessible executable by the name is found and False otherwise.
+  Singleton constructor for resolver instances. If a resolver already exists
+  for the process then it's returned. Otherwise one is created and started.
   
   Arguments:
-    command - name of the command for which to search
+    processName - name of the process being resolved
+    processPid  - pid of the process being resolved, if undefined this matches
+                  against any resolver with the process name
   """
   
-  for path in os.environ["PATH"].split(os.pathsep):
-    cmdPath = os.path.join(path, command)
-    if os.path.exists(cmdPath) and os.access(cmdPath, os.X_OK): return True
+  # check if one's already been created
+  haltedIndex = -1 # old instance of this resolver with the _halt flag set
+  for i in range(len(RESOLVERS)):
+    resolver = RESOLVERS[i]
+    if resolver.processName == processName and (not processPid or resolver.processPid == processPid):
+      if resolver._halt and RECREATE_HALTED_RESOLVERS: haltedIndex = i
+      else: return resolver
   
-  return False
+  # make a new resolver
+  r = ConnectionResolver(processName, processPid)
+  r.start()
   
+  # overwrites halted instance of this resolver if it exists, otherwise append
+  if haltedIndex == -1: RESOLVERS.append(r)
+  else: RESOLVERS[haltedIndex] = r
+  return r
+
 if __name__ == '__main__':
   # quick method for testing connection resolution
-  userInput = raw_input("Enter query (RESOLVER PROCESS_NAME [PID]: ").split()
+  userInput = raw_input("Enter query (<ss, netstat, lsof> PROCESS_NAME [PID]): ").split()
   
   # checks if there's enough arguments
   if len(userInput) == 0: sys.exit(0)
@@ -225,7 +235,7 @@
     self.processName = processName
     self.processPid = processPid
     self.resolveRate = resolveRate
-    self.defaultRate = RESOLVER_MIN_DEFAULT_LOOKUP
+    self.defaultRate = CONFIG["queries.connections.minRate"]
     self.lastLookup = -1
     self.overwriteResolver = None
     self.defaultResolver = CMD_NETSTAT
@@ -233,22 +243,33 @@
     # sets the default resolver to be the first found in the system's PATH
     # (left as netstat if none are found)
     for resolver in [CMD_NETSTAT, CMD_SS, CMD_LSOF]:
-      if _isAvailable(CMD_STR[resolver]):
-        self.defaultResolve = resolver
+      if sysTools.isAvailable(CMD_STR[resolver]):
+        self.defaultResolver = resolver
         break
     
     self._connections = []        # connection cache (latest results)
     self._isPaused = False
     self._halt = False            # terminates thread if true
+    self._cond = threading.Condition()  # used for pausing the thread
     self._subsiquentFailures = 0  # number of failed resolutions with the default in a row
     self._resolverBlacklist = []  # resolvers that have failed to resolve
+    
+    # Number of sequential times the rate threshold's been broken. This is to
+    # avoid having stray spikes drive up the rate.
+    self._rateThresholdBroken = 0
   
   def run(self):
     while not self._halt:
       minWait = self.resolveRate if self.resolveRate else self.defaultRate
+      timeSinceReset = time.time() - self.lastLookup
       
-      if self._isPaused or time.time() - self.lastLookup < minWait:
-        time.sleep(RESOLVER_SLEEP_INTERVAL)
+      if self._isPaused or timeSinceReset < minWait:
+        sleepTime = max(0.2, minWait - timeSinceReset)
+        
+        self._cond.acquire()
+        if not self._halt: self._cond.wait(sleepTime)
+        self._cond.release()
+        
         continue # done waiting, try again
       
       isDefault = self.overwriteResolver == None
@@ -264,13 +285,27 @@
         connResults = getConnections(resolver, self.processName, self.processPid)
         lookupTime = time.time() - resolveStart
         
-        log.log(log.DEBUG, "%s queried in %.4f seconds (%i results)" % (CMD_STR[resolver], lookupTime, len(connResults)))
+        self._connections = connResults
         
-        self._connections = connResults
-        self.defaultRate = max(5, 10 % lookupTime)
+        newMinDefaultRate = 100 * lookupTime
+        if self.defaultRate < newMinDefaultRate:
+          if self._rateThresholdBroken >= 3:
+            # adding extra to keep the rate from frequently changing
+            self.defaultRate = newMinDefaultRate + 0.5
+            
+            msg = "connection lookup time increasing to %0.1f seconds per call" % self.defaultRate
+            log.log(CONFIG["log.connLookupRateGrowing"], msg)
+          else: self._rateThresholdBroken += 1
+        else: self._rateThresholdBroken = 0
+        
         if isDefault: self._subsiquentFailures = 0
       except IOError, exc:
-        log.log(log.INFO, str(exc)) # notice that a single resolution has failed
+        # this logs in a couple of cases:
+        # - failures specially noted by getConnections (most cases are already
+        #   logged via sysTools)
+        # - fail-overs for the default resolution method
+        if str(exc).startswith("No results found using:"):
+          log.log(CONFIG["log.connLookupFailed"], str(exc))
         
         if isDefault:
           self._subsiquentFailures += 1
@@ -288,11 +323,12 @@
                 break
             
             if newResolver:
-              # provide notice that failures have occured and resolver is changing
-              log.log(log.NOTICE, RESOLVER_SERIAL_FAILURE_MSG % (CMD_STR[resolver], CMD_STR[newResolver]))
+              # provide notice that failures have occurred and resolver is changing
+              msg = RESOLVER_SERIAL_FAILURE_MSG % (CMD_STR[resolver], CMD_STR[newResolver])
+              log.log(CONFIG["log.connLookupFailover"], msg)
             else:
               # exhausted all resolvers, give warning
-              log.log(log.WARN, RESOLVER_FINAL_FAILURE_MSG)
+              log.log(CONFIG["log.connLookupAbandon"], RESOLVER_FINAL_FAILURE_MSG)
             
             self.defaultResolver = newResolver
       finally:
@@ -325,8 +361,8 @@
     Halts further resolutions and terminates the thread.
     """
     
+    self._cond.acquire()
     self._halt = True
-    
-    # removes this from consideration among active singleton instances
-    if self in RESOLVERS: RESOLVERS.remove(self)
+    self._cond.notifyAll()
+    self._cond.release()
 

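For anyone poking at the new connections util from an interpreter, here's a rough usage sketch in the spirit of the torTools example later in this commit (the pid just echoes the sample netstat output above, and the import assumes arm's base directory is on the path):

from util import connections

# one-off, blocking lookup via netstat (raises an IOError if it fails or
# nothing's resolved)
for entry in connections.getConnections(connections.CMD_NETSTAT, "tor", "9912"):
  print entry

# or let the singleton daemon poll in the background at the configured rate
resolver = connections.getResolver("tor", "9912")
print connections.isResolverAlive("tor", "9912")  # True while it's running
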
Modified: arm/release/util/hostnames.py
===================================================================
--- arm/release/util/hostnames.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/util/hostnames.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -25,7 +25,6 @@
 #     - When adding/removing from the cache (prevents workers from updating
 #       an outdated cache reference).
 
-import os
 import time
 import socket
 import threading
@@ -33,19 +32,25 @@
 import Queue
 import distutils.sysconfig
 
+import log
+import sysTools
+
 RESOLVER = None                       # hostname resolver (service is stopped if None)
 RESOLVER_LOCK = threading.RLock()     # regulates assignment to the RESOLVER
-RESOLVER_CACHE_SIZE = 700000          # threshold for when cached results are discarded
-RESOLVER_CACHE_TRIM_SIZE = 200000     # number of entries discarded when the limit's reached
-RESOLVER_THREAD_POOL_SIZE = 5         # upping to around 30 causes the program to intermittently seize
 RESOLVER_COUNTER = itertools.count()  # atomic counter, providing the age for new entries (for trimming)
 DNS_ERROR_CODES = ("1(FORMERR)", "2(SERVFAIL)", "3(NXDOMAIN)", "4(NOTIMP)", "5(REFUSED)", "6(YXDOMAIN)",
                    "7(YXRRSET)", "8(NXRRSET)", "9(NOTAUTH)", "10(NOTZONE)", "16(BADVERS)")
 
-# If true this allows for the use of socket.gethostbyaddr to resolve addresses
-# (this seems to be far slower, but would seem preferable if I'm wrong...).
-ALLOW_SOCKET_RESOLUTION = False
+CONFIG = {"queries.hostnames.poolSize": 5, "queries.hostnames.useSocketModule": False, "cache.hostnames.size": 700000, "cache.hostnames.trimSize": 200000, "log.hostnameCacheTrimmed": log.INFO}
 
+def loadConfig(config):
+  config.update(CONFIG)
+  
+  # ensures sane config values
+  CONFIG["queries.hostnames.poolSize"] = max(1, CONFIG["queries.hostnames.poolSize"])
+  CONFIG["cache.hostnames.size"] = max(100, CONFIG["cache.hostnames.size"])
+  CONFIG["cache.hostnames.trimSize"] = max(10, min(CONFIG["cache.hostnames.trimSize"], CONFIG["cache.hostnames.size"] / 2))
+
 def start():
   """
   Primes the service to start resolving addresses. Calling this explicitly is
@@ -74,7 +79,7 @@
     resolverRef, RESOLVER = RESOLVER, None
     
     # joins on its worker thread pool
-    resolverRef.halt = True
+    resolverRef.stop()
     for t in resolverRef.threadPool: t.join()
   RESOLVER_LOCK.release()
 
@@ -157,7 +162,7 @@
     # get cache entry, raising if an exception and returning if a hostname
     cacheRef = resolverRef.resolvedCache
     
-    if ipAddr in cacheRef.keys():
+    if ipAddr in cacheRef:
       entry = cacheRef[ipAddr][0]
       if suppressIOExc and type(entry) == IOError: return None
       elif isinstance(entry, Exception): raise entry
@@ -167,7 +172,7 @@
     # if resolver has cached an IOError then flush the entry (this defaults to
     # suppression since these errors may be transient)
     cacheRef = resolverRef.resolvedCache
-    flush = ipAddr in cacheRef.keys() and type(cacheRef[ipAddr]) == IOError
+    flush = ipAddr in cacheRef and type(cacheRef[ipAddr]) == IOError
     
     try: return resolverRef.getHostname(ipAddr, timeout, flush)
     except IOError: return None
@@ -220,13 +225,8 @@
     ipAddr - ip address to be resolved
   """
   
-  hostCall = os.popen("host %s 2> /dev/null" % ipAddr)
-  hostname = hostCall.read()
-  hostCall.close()
+  hostname = sysTools.call("host %s" % ipAddr)[0].split()[-1:][0]
   
-  if hostname: hostname = hostname.split()[-1:][0]
-  else: raise IOError("lookup failed - is the host command available?")
-  
   if hostname == "reached":
     # got message: ";; connection timed out; no servers could be reached"
     raise IOError("lookup timed out")
@@ -255,15 +255,16 @@
     self.totalResolves = 0                # counter for the total number of addresses queried to be resolved
     self.isPaused = False                 # prevents further resolutions if true
     self.halt = False                     # if true, tells workers to stop
+    self.cond = threading.Condition()     # used for pausing threads
     
     # Determines if resolutions are made using os 'host' calls or python's
     # 'socket.gethostbyaddr'. The following checks if the system has the
     # gethostbyname_r function, which determines if python resolutions can be
     # done in parallel or not. If so, this is preferable.
     isSocketResolutionParallel = distutils.sysconfig.get_config_var("HAVE_GETHOSTBYNAME_R")
-    self.useSocketResolution = ALLOW_SOCKET_RESOLUTION and isSocketResolutionParallel
+    self.useSocketResolution = CONFIG["queries.hostnames.useSocketModule"] and isSocketResolutionParallel
     
-    for _ in range(RESOLVER_THREAD_POOL_SIZE):
+    for _ in range(CONFIG["queries.hostnames.poolSize"]):
       t = threading.Thread(target = self._workerLoop)
       t.setDaemon(True)
       t.start()
@@ -291,7 +292,7 @@
     # during this call)
     cacheRef = self.resolvedCache
     
-    if not flushCache and ipAddr in cacheRef.keys():
+    if not flushCache and ipAddr in cacheRef:
       # cached response is available - raise if an error, return if a hostname
       response = cacheRef[ipAddr][0]
       if isinstance(response, Exception): raise response
@@ -307,7 +308,7 @@
       startTime = time.time()
       
       while timeout == None or time.time() - startTime < timeout:
-        if ipAddr in cacheRef.keys():
+        if ipAddr in cacheRef:
           # address was resolved - raise if an error, return if a hostname
           response = cacheRef[ipAddr][0]
           if isinstance(response, Exception): raise response
@@ -316,6 +317,16 @@
     
     return None # timeout reached without resolution
   
+  def stop(self):
+    """
+    Halts further resolutions and terminates the thread.
+    """
+    
+    self.cond.acquire()
+    self.halt = True
+    self.cond.notifyAll()
+    self.cond.release()
+  
   def _workerLoop(self):
     """
     Simple producer-consumer loop followed by worker threads. This takes
@@ -326,13 +337,21 @@
     
     while not self.halt:
       # if resolver is paused then put a hold on further resolutions
-      while self.isPaused and not self.halt: time.sleep(0.25)
-      if self.halt: break
+      if self.isPaused:
+        self.cond.acquire()
+        if not self.halt: self.cond.wait(1)
+        self.cond.release()
+        continue
       
       # snags next available ip, timeout is because queue can't be woken up
       # when 'halt' is set
-      try: ipAddr = self.unresolvedQueue.get(True, 0.25)
-      except Queue.Empty: continue
+      try: ipAddr = self.unresolvedQueue.get_nowait()
+      except Queue.Empty:
+        # no elements ready, wait a little while and try again
+        self.cond.acquire()
+        if not self.halt: self.cond.wait(1)
+        self.cond.release()
+        continue
       if self.halt: break
       
       try:
@@ -345,16 +364,20 @@
       self.resolvedCache[ipAddr] = (result, RESOLVER_COUNTER.next())
       
       # trim cache if excessively large (clearing out oldest entries)
-      if len(self.resolvedCache) > RESOLVER_CACHE_SIZE:
+      if len(self.resolvedCache) > CONFIG["cache.hostnames.size"]:
         # Providing for concurrent, non-blocking calls requires that entries are
         # never removed from the cache, so this creates a new, trimmed version
         # instead.
         
         # determines minimum age of entries to be kept
         currentCount = RESOLVER_COUNTER.next()
-        threshold = currentCount - (RESOLVER_CACHE_SIZE - RESOLVER_CACHE_TRIM_SIZE)
+        newCacheSize = CONFIG["cache.hostnames.size"] - CONFIG["cache.hostnames.trimSize"]
+        threshold = currentCount - newCacheSize
         newCache = {}
         
+        msg = "trimming hostname cache from %i entries to %i" % (len(self.resolvedCache), newCacheSize)
+        log.log(CONFIG["log.hostnameCacheTrimmed"], msg)
+        
         # checks age of each entry, adding to toDelete if too old
         for ipAddr, entry in self.resolvedCache.iteritems():
           if entry[1] >= threshold: newCache[ipAddr] = entry

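The cache trim above never deletes from the shared dict (lookups are concurrent and non-blocking); it builds a fresh dict keeping only entries whose counter is at or above an age threshold. A toy, self-contained sketch of that approach with made up sizes and addresses:

import itertools

counter = itertools.count()
cache = dict([("10.0.0.%i" % i, ("host%i.example.com" % i, counter.next())) for i in range(10)])

# keeps only the five most recently added entries
newCacheSize = 5
threshold = counter.next() - newCacheSize
trimmed = dict([(addr, entry) for addr, entry in cache.items() if entry[1] >= threshold])

print sorted(trimmed.keys())  # the five addresses added last
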
Modified: arm/release/util/log.py
===================================================================
--- arm/release/util/log.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/util/log.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -14,9 +14,8 @@
 DEBUG, INFO, NOTICE, WARN, ERR = range(1, 6)
 RUNLEVEL_STR = {DEBUG: "DEBUG", INFO: "INFO", NOTICE: "NOTICE", WARN: "WARN", ERR: "ERR"}
 
-LOG_LIMIT = 1000            # threshold (per runlevel) at which entries are discarded
-LOG_TRIM_SIZE = 200         # number of entries discarded when the limit's reached
-LOG_LOCK = RLock()          # provides thread safety for logging operations
+# provides thread safety for logging operations
+LOG_LOCK = RLock()
 
 # chronologically ordered records of events for each runlevel, stored as tuples
 # consisting of: (time, message)
@@ -25,17 +24,57 @@
 # mapping of runlevels to the listeners interested in receiving events from it
 _listeners = dict([(level, []) for level in range(1, 6)])
 
+CONFIG = {"cache.armLog.size": 1000, "cache.armLog.trimSize": 200}
+
+def loadConfig(config):
+  config.update(CONFIG)
+  
+  # ensures sane config values
+  CONFIG["cache.armLog.size"] = max(10, CONFIG["cache.armLog.size"])
+  CONFIG["cache.armLog.trimSize"] = max(5, min(CONFIG["cache.armLog.trimSize"], CONFIG["cache.armLog.size"] / 2))
+
+def strToRunlevel(runlevelStr):
+  """
+  Converts runlevel strings ("DEBUG", "INFO", "NOTICE", etc) to their
+  corresponding enumerations. This isn't case sensitive and provides None if
+  unrecognized.
+  
+  Arguments:
+    runlevelStr - string to be converted to runlevel
+  """
+  
+  if not runlevelStr: return None
+  
+  runlevelStr = runlevelStr.upper()
+  for enum, level in RUNLEVEL_STR.items():
+    if level == runlevelStr: return enum
+  
+  return None
+
+def runlevelToStr(runlevelEnum):
+  """
+  Converts runlevel enumerations to corresponding string. If unrecognized then
+  this provides "NONE".
+  
+  Arguments:
+    runlevelEnum - enumeration to be converted to string
+  """
+  
+  if runlevelEnum in RUNLEVEL_STR: return RUNLEVEL_STR[runlevelEnum]
+  else: return "NONE"
+
 def log(level, msg, eventTime = None):
   """
   Registers an event, directing it to interested listeners and preserving it in
-  the backlog.
+  the backlog. If the level is None then this is a no-op.
   
   Arguments:
-    level     - runlevel coresponding to the message severity
+    level     - runlevel corresponding to the message severity
     msg       - string associated with the message
-    eventTime - unix time at which the event occured, current time if undefined
+    eventTime - unix time at which the event occurred, current time if undefined
   """
   
+  if not level: return
   if eventTime == None: eventTime = time.time()
   
   LOG_LOCK.acquire()
@@ -57,9 +96,9 @@
           eventBacklog.insert(i + 1, newEvent)
           break
     
-    # turncates backlog if too long
-    toDelete = len(eventBacklog) - LOG_LIMIT
-    if toDelete >= 0: del eventBacklog[: toDelete + LOG_TRIM_SIZE]
+    # truncates backlog if too long
+    toDelete = len(eventBacklog) - CONFIG["cache.armLog.size"]
+    if toDelete >= 0: del eventBacklog[: toDelete + CONFIG["cache.armLog.trimSize"]]
     
     # notifies listeners
     for callback in _listeners[level]:
@@ -69,7 +108,7 @@
 
 def addListener(level, callback):
   """
-  Directs future events to the given fallback function. The runlevels passed on
+  Directs future events to the given callback function. The runlevels passed on
   to listeners are provided as the corresponding strings ("DEBUG", "INFO",
   "NOTICE", etc), and times in POSIX (unix) time.
   

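To make the configurable runlevels concrete: callers fetch their runlevel from CONFIG (possibly None) and pass it straight to log.log(), which now drops None entirely. A small hedged sketch (the listener is invented; its signature of a runlevel string, message, and unix time follows the addListener pydocs):

import time
from util import log

def printEvent(runlevelStr, msg, eventTime):
  print "%s [%s] %s" % (time.strftime("%H:%M:%S", time.localtime(eventTime)), runlevelStr, msg)

log.addListener(log.NOTICE, printEvent)

log.log(log.strToRunlevel("notice"), "this reaches the listener above")
log.log(None, "this is silently dropped")  # a message type disabled via the config
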
Modified: arm/release/util/panel.py
===================================================================
--- arm/release/util/panel.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/util/panel.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -5,20 +5,25 @@
 import curses
 from threading import RLock
 
-import uiTools
+import log, uiTools
 
 # global ui lock governing all panel instances (curses isn't thread safe and 
 # concurrency bugs produce especially sinister glitches)
 CURSES_LOCK = RLock()
 
 # tags used by addfstr - this maps to functor/argument combinations since the
-# actual values (color attributes - grr...) might not yet be initialized
+# actual values (in the case of color attributes) might not yet be initialized
 def _noOp(arg): return arg
 FORMAT_TAGS = {"<b>": (_noOp, curses.A_BOLD),
                "<u>": (_noOp, curses.A_UNDERLINE),
                "<h>": (_noOp, curses.A_STANDOUT)}
-for colorLabel in uiTools.COLOR_LIST.keys(): FORMAT_TAGS["<%s>" % colorLabel] = (uiTools.getColor, colorLabel)
+for colorLabel in uiTools.COLOR_LIST: FORMAT_TAGS["<%s>" % colorLabel] = (uiTools.getColor, colorLabel)
 
+CONFIG = {"log.panelRecreated": log.DEBUG}
+
+def loadConfig(config):
+  config.update(CONFIG)
+
 class Panel():
   """
   Wrapper for curses subwindows. This hides most of the ugliness in common
@@ -26,19 +31,20 @@
     - locking when concurrently drawing to multiple windows
     - gracefully handle terminal resizing
     - clip text that falls outside the panel
-    - convenience methods for word wrap, inline formatting, etc
+    - convenience methods for word wrap, in-line formatting, etc
   
   This uses a design akin to Swing where panel instances provide their display
   implementation by overwriting the draw() method, and are redrawn with
   redraw().
   """
   
-  def __init__(self, parent, top, height=-1, width=-1):
+  def __init__(self, parent, name, top, height=-1, width=-1):
     """
     Creates a durable wrapper for a curses subwindow in the given parent.
     
     Arguments:
       parent - parent curses window
+      name   - identifier for the panel
       top    - positioning of top within parent
       height - maximum height of panel (uses all available space if -1)
       width  - maximum width of panel (uses all available space if -1)
@@ -49,6 +55,7 @@
     # might choose their height based on their parent's current width).
     
     self.parent = parent
+    self.panelName = name
     self.top = top
     self.height = height
     self.width = width
@@ -64,6 +71,13 @@
     
     self.maxY, self.maxX = -1, -1 # subwindow dimensions when last redrawn
   
+  def getName(self):
+    """
+    Provides the panel's identifier.
+    """
+    
+    return self.panelName
+  
   def getParent(self):
     """
     Provides the parent used to create subwindows.
@@ -148,15 +162,18 @@
     """
     
     newHeight, newWidth = self.parent.getmaxyx()
+    setHeight, setWidth = self.getHeight(), self.getWidth()
     newHeight = max(0, newHeight - self.top)
-    if self.height != -1: newHeight = min(self.height, newHeight)
-    if self.width != -1: newWidth = min(self.width, newWidth)
+    if setHeight != -1: newHeight = min(newHeight, setHeight)
+    if setWidth != -1: newWidth = min(newWidth, setWidth)
     return (newHeight, newWidth)
   
   def draw(self, subwindow, width, height):
     """
     Draws display's content. This is meant to be overwritten by 
-    implementations and not called directly (use redraw() instead).
+    implementations and not called directly (use redraw() instead). The
+    dimensions provided are the drawable dimensions, whose width is one column
+    less than the actual subwindow width.
     
     Arguments:
       subwindow - panel's current subwindow instance, providing raw access to
@@ -167,15 +184,16 @@
     
     pass
   
-  def redraw(self, refresh=False, block=False):
+  def redraw(self, forceRedraw=False, block=False):
     """
-    Clears display and redraws its content.
+    Clears display and redraws its content. This can skip redrawing content if
+    able (ie, the subwindow's unchanged), instead just refreshing the display.
     
     Arguments:
-      refresh - skips redrawing content if able (ie, the subwindow's 
-                unchanged), instead just refreshing the display
-      block   - if drawing concurrently with other panels this determines if
-                the request is willing to wait its turn or should be abandoned
+      forceRedraw - forces the content to be cleared and redrawn if true
+      block       - if drawing concurrently with other panels this determines
+                    if the request is willing to wait its turn or should be
+                    abandoned
     """
     
     # if the panel's completely outside its parent then this is a no-op
@@ -195,13 +213,14 @@
     
     subwinMaxY, subwinMaxX = self.win.getmaxyx()
     if isNewWindow or subwinMaxY != self.maxY or subwinMaxX != self.maxX:
-      refresh = False
+      forceRedraw = True
     
     self.maxY, self.maxX = subwinMaxY, subwinMaxX
     if not CURSES_LOCK.acquire(block): return
     try:
-      self.win.erase() # clears any old contents
-      if not refresh: self.draw(self.win, self.maxX, self.maxY)
+      if forceRedraw:
+        self.win.erase() # clears any old contents
+        self.draw(self.win, self.maxX - 1, self.maxY)
       self.win.refresh()
     finally:
       CURSES_LOCK.release()
@@ -366,6 +385,7 @@
       manifests if the terminal's shrunk then re-expanded. Displaced
       subwindows are never restored to their proper position, resulting in
       graphical glitches if we draw to them.
+    - The preferred size is smaller than the actual size (should shrink).
     
     This returns True if a new subwindow instance was created, False otherwise.
     """
@@ -379,6 +399,7 @@
       subwinMaxY, subwinMaxX = self.win.getmaxyx()
       recreate |= subwinMaxY < newHeight              # check for vertical growth
       recreate |= self.top > self.win.getparyx()[0]   # check for displacement
+      recreate |= subwinMaxX > newWidth or subwinMaxY > newHeight # shrinking
     
     # I'm not sure if recreating subwindows is some sort of memory leak but the
     # Python curses bindings seem to lack all of the following:
@@ -387,6 +408,11 @@
     # so this is the only option (besides removing subwindows entirely which 
     # would mean far more complicated code and no more selective refreshing)
     
-    if recreate: self.win = self.parent.subwin(newHeight, newWidth, self.top, 0)
+    if recreate:
+      self.win = self.parent.subwin(newHeight, newWidth, self.top, 0)
+      
+      # note: doing this log before setting win produces an infinite loop
+      msg = "recreating panel '%s' with the dimensions of %i/%i" % (self.panelName, newHeight, newWidth)
+      log.log(CONFIG["log.panelRecreated"], msg)
     return recreate
   

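Since panels now take a name and supply their rendering by overwriting draw(), here's a minimal hedged sketch of a subclass (ClockPanel is invented for illustration; it writes directly to the subwindow handed to draw()):

import time
from util import panel

class ClockPanel(panel.Panel):
  def __init__(self, stdscr):
    panel.Panel.__init__(self, stdscr, "clock", 0, height=1)
  
  def draw(self, subwindow, width, height):
    # clips to the drawable width noted in the draw() pydocs
    subwindow.addstr(0, 0, time.strftime("%H:%M:%S")[:width])

# within a curses application: ClockPanel(stdscr).redraw(True)
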
Copied: arm/release/util/sysTools.py (from rev 22616, arm/trunk/util/sysTools.py)
===================================================================
--- arm/release/util/sysTools.py	                        (rev 0)
+++ arm/release/util/sysTools.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -0,0 +1,172 @@
+"""
+Helper functions for working with the underlying system.
+"""
+
+import os
+import time
+import threading
+
+import log
+
+# mapping of commands to if they're available or not
+CMD_AVAILABLE_CACHE = {}
+
+# cached system call results, mapping the command issued to the (time, results) tuple
+CALL_CACHE = {}
+IS_FAILURES_CACHED = True           # caches both successful and failed results if true
+CALL_CACHE_LOCK = threading.RLock() # governs concurrent modifications of CALL_CACHE
+
+CONFIG = {"cache.sysCalls.size": 600, "log.sysCallMade": log.DEBUG, "log.sysCallCached": None, "log.sysCallFailed": log.INFO, "log.sysCallCacheGrowing": log.INFO}
+
+def loadConfig(config):
+  config.update(CONFIG)
+
+def isAvailable(command, cached=True):
+  """
+  Checks the current PATH to see if a command is available or not. If a full
+  call is provided then this just checks the first command (for instance
+  "ls -a | grep foo" is truncated to "ls"). This returns True if an accessible
+  executable by the name is found and False otherwise.
+  
+  Arguments:
+    command - command for which to search
+    cached  - this makes use of available cached results if true, otherwise
+              they're overwritten
+  """
+  
+  if " " in command: command = command.split(" ")[0]
+  
+  if cached and command in CMD_AVAILABLE_CACHE:
+    return CMD_AVAILABLE_CACHE[command]
+  else:
+    cmdExists = False
+    for path in os.environ["PATH"].split(os.pathsep):
+      cmdPath = os.path.join(path, command)
+      
+      if os.path.exists(cmdPath) and os.access(cmdPath, os.X_OK):
+        cmdExists = True
+        break
+    
+    CMD_AVAILABLE_CACHE[command] = cmdExists
+    return cmdExists
+
+def call(command, cacheAge=0, suppressExc=False, quiet=True):
+  """
+  Convenience function for performing system calls, providing:
+  - suppression of any output leaking to the display, both by directing stderr
+    to /dev/null and by checking that commands exist before executing them
+  - logging of results (command issued, runtime, success/failure, etc)
+  - optional exception suppression and caching (the max age for cached results
+    is a minute)
+  
+  Arguments:
+    command     - command to be issued
+    cacheAge    - uses cached results rather than issuing a new request if last
+                  fetched within this number of seconds (if zero then all
+                  caching functionality is skipped)
+    suppressExc - provides None in cases of failure if True, otherwise IOErrors
+                  are raised
+    quiet       - if True, "2> /dev/null" is appended to all commands
+  """
+  
+  # caching functionality (fetching and trimming)
+  if cacheAge > 0:
+    global CALL_CACHE, CONFIG
+    
+    # enforces the invariant that we never use entries over a minute old (these
+    # results are 'dirty' and might be trimmed at any time)
+    cacheAge = min(cacheAge, 60)
+    cacheSize = CONFIG["cache.sysCalls.size"]
+    
+    # if the cache is especially large then trim old entries
+    if len(CALL_CACHE) > cacheSize:
+      CALL_CACHE_LOCK.acquire()
+      
+      # checks that we haven't trimmed while waiting
+      if len(CALL_CACHE) > cacheSize:
+        # constructs a new cache with only entries less than a minute old
+        newCache, currentTime = {}, time.time()
+        
+        for cachedCommand, cachedResult in CALL_CACHE.items():
+          if currentTime - cachedResult[0] < 60:
+            newCache[cachedCommand] = cachedResult
+        
+        # if the trimmed cache is still almost as big as the size limit then we
+        # risk doing this frequently, so grow the limit and log
+        if len(newCache) > (0.75 * cacheSize):
+          cacheSize = len(newCache) * 2
+          CONFIG["cache.sysCalls.size"] = cacheSize
+          
+          msg = "growing system call cache to %i entries" % cacheSize
+          log.log(CONFIG["log.sysCallCacheGrowing"], msg)
+        
+        CALL_CACHE = newCache
+      CALL_CACHE_LOCK.release()
+    
+    # checks if we can make use of cached results
+    if command in CALL_CACHE and time.time() - CALL_CACHE[command][0] < cacheAge:
+      cachedResults = CALL_CACHE[command][1]
+      cacheAge = time.time() - CALL_CACHE[command][0]
+      
+      if isinstance(cachedResults, IOError):
+        if IS_FAILURES_CACHED:
+          msg = "system call (cached failure): %s (age: %0.1f seconds, error: %s)" % (command, cacheAge, str(cachedResults))
+          log.log(CONFIG["log.sysCallCached"], msg)
+          
+          if suppressExc: return None
+          else: raise cachedResults
+        else:
+          # flag was toggled after a failure was cached - reissue call, ignoring the cache
+          return call(command, 0, suppressExc, quiet)
+      else:
+        msg = "system call (cached): %s (age: %0.1f seconds)" % (command, cacheAge)
+        log.log(CONFIG["log.sysCallCached"], msg)
+        
+        return cachedResults
+  
+  startTime = time.time()
+  commandComp = command.split("|")
+  commandCall, results, errorExc = None, None, None
+  
+  # preprocessing for the commands to prevent anything going to stdout
+  for i in range(len(commandComp)):
+    subcommand = commandComp[i].strip()
+    
+    if not isAvailable(subcommand): errorExc = IOError("'%s' is unavailable" % subcommand.split(" ")[0])
+    if quiet: commandComp[i] = "%s 2> /dev/null" % subcommand
+  
+  # processes the system call
+  if not errorExc:
+    try:
+      commandCall = os.popen(" | ".join(commandComp))
+      results = commandCall.readlines()
+    except IOError, exc:
+      errorExc = exc
+  
+  # make sure sys call is closed
+  if commandCall: commandCall.close()
+  
+  if errorExc:
+    # log failure and either provide None or re-raise exception
+    msg = "system call (failed): %s (error: %s)" % (command, str(errorExc))
+    log.log(CONFIG["log.sysCallFailed"], msg)
+    
+    if cacheAge > 0 and IS_FAILURES_CACHED:
+      CALL_CACHE_LOCK.acquire()
+      CALL_CACHE[command] = (time.time(), errorExc)
+      CALL_CACHE_LOCK.release()
+    
+    if suppressExc: return None
+    else: raise errorExc
+  else:
+    # log call information and if we're caching then save the results
+    msg = "system call: %s (runtime: %0.2f seconds)" % (command, time.time() - startTime)
+    log.log(CONFIG["log.sysCallMade"], msg)
+    
+    if cacheAge > 0:
+      CALL_CACHE_LOCK.acquire()
+      CALL_CACHE[command] = (time.time(), results)
+      CALL_CACHE_LOCK.release()
+    
+    return results
+

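A couple of hedged examples of what the call() wrapper above buys callers (the commands are ordinary shell commands and the pid is a placeholder):

from util import sysTools

# results are cached for up to thirty seconds, so repeated calls within that
# window reuse the previous output rather than forking again
psResults = sysTools.call("ps -p 9912 -o etime,rss", cacheAge=30)

# failures (including a missing binary) give None rather than an IOError
if sysTools.call("netstat -npt | grep 9912/tor", suppressExc=True) is None:
  print "unable to query connections with netstat"
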
Copied: arm/release/util/torTools.py (from rev 22616, arm/trunk/util/torTools.py)
===================================================================
--- arm/release/util/torTools.py	                        (rev 0)
+++ arm/release/util/torTools.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -0,0 +1,877 @@
+"""
+Helper for working with an active tor process. This provides both a wrapper for
+accessing TorCtl and notifications of state changes to subscribers. To quickly
+fetch a TorCtl instance to experiment with, use the following:
+
+>>> import util.torTools
+>>> conn = util.torTools.connect()
+>>> conn.get_info("version")["version"]
+'0.2.1.24'
+"""
+
+import os
+import time
+import socket
+import getpass
+import thread
+import threading
+
+from TorCtl import TorCtl
+
+import log
+import sysTools
+
+# enums for tor's controller state:
+# TOR_INIT - attached to a new controller or restart/sighup signal received
+# TOR_CLOSED - control port closed
+TOR_INIT, TOR_CLOSED = range(1, 3)
+
+# Message logged by default when a controller event type can't be set (message
+# has the event type inserted into it). This skips logging entirely if None.
+DEFAULT_FAILED_EVENT_ENTRY = (log.WARN, "Unsupported event type: %s")
+
+# TODO: check version when reattaching to controller and if version changes, flush?
+# Skips attempting to set events we've failed to set before. This avoids
+# logging duplicate warnings but can be problematic if controllers belonging
+# to multiple versions of tor are attached, making this unreflective of the
+# controller's capabilities. However, this is a pretty bizarre edge case.
+DROP_FAILED_EVENTS = True
+FAILED_EVENTS = set()
+
+CONTROLLER = None # singleton Controller instance
+INCORRECT_PASSWORD_MSG = "Provided passphrase was incorrect"
+
+# valid keys for the controller's getInfo cache
+CACHE_ARGS = ("nsEntry", "descEntry", "bwRate", "bwBurst", "bwObserved",
+              "bwMeasured", "flags", "fingerprint", "pid")
+
+UNKNOWN = "UNKNOWN" # value used by cached information if undefined
+CONFIG = {"log.torGetInfo": log.DEBUG, "log.torGetConf": log.DEBUG}
+
+def loadConfig(config):
+  config.update(CONFIG)
+
+def makeCtlConn(controlAddr="127.0.0.1", controlPort=9051):
+  """
+  Opens a socket to the tor controller and queries its authentication type,
+  raising an IOError if problems occur. The result of this function is a tuple
+  of the TorCtl connection and the authentication type, where the later is one
+  of the following:
+  "NONE"          - no authentication required
+  "PASSWORD"      - requires authentication via a hashed password
+  "COOKIE=<FILE>" - requires the specified authentication cookie
+  
+  Arguments:
+    controlAddr - ip address belonging to the controller
+    controlPort - port belonging to the controller
+  """
+  
+  try:
+    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+    s.connect((controlAddr, controlPort))
+    conn = TorCtl.Connection(s)
+  except socket.error, exc:
+    if "Connection refused" in exc.args:
+      # most common case - tor control port isn't available
+      raise IOError("Connection refused. Is the ControlPort enabled?")
+    else: raise IOError("Failed to establish socket: %s" % exc)
+  
+  # check PROTOCOLINFO for authentication type
+  try:
+    authInfo = conn.sendAndRecv("PROTOCOLINFO\r\n")[1][1]
+  except TorCtl.ErrorReply, exc:
+    raise IOError("Unable to query PROTOCOLINFO for authentication type: %s" % exc)
+  
+  if authInfo.startswith("AUTH METHODS=NULL"):
+    # no authentication required
+    return (conn, "NONE")
+  elif authInfo.startswith("AUTH METHODS=HASHEDPASSWORD"):
+    # password authentication
+    return (conn, "PASSWORD")
+  elif authInfo.startswith("AUTH METHODS=COOKIE"):
+    # cookie authentication, parses authentication cookie path
+    start = authInfo.find("COOKIEFILE=\"") + 12
+    end = authInfo.find("\"", start)
+    return (conn, "COOKIE=%s" % authInfo[start:end])
+
+def initCtlConn(conn, authType="NONE", authVal=None):
+  """
+  Authenticates to a tor connection. The authentication type can be any of the
+  following strings:
+  NONE, PASSWORD, COOKIE
+  
+  If the authentication type is anything other than NONE then either a
+  passphrase or path to an authentication cookie is expected. If an issue
+  arises this raises either of the following:
+    - IOError for failures in reading an authentication cookie
+    - TorCtl.ErrorReply for authentication failures
+  
+  Arguments:
+    conn     - unauthenticated TorCtl connection
+    authType - type of authentication method to use
+    authVal  - passphrase or path to authentication cookie
+  """
+  
+  # validates input
+  if authType not in ("NONE", "PASSWORD", "COOKIE"):
+    # authentication type unrecognized (possibly a new addition to the controlSpec?)
+    raise TorCtl.ErrorReply("Unrecognized authentication type: %s" % authType)
+  elif authType != "NONE" and authVal == None:
+    typeLabel = "passphrase" if authType == "PASSWORD" else "cookie"
+    raise TorCtl.ErrorReply("Unable to authenticate: no %s provided" % typeLabel)
+  
+  authCookie = None
+  try:
+    if authType == "NONE": conn.authenticate("")
+    elif authType == "PASSWORD": conn.authenticate(authVal)
+    else:
+      authCookie = open(authVal, "r")
+      conn.authenticate_cookie(authCookie)
+      authCookie.close()
+  except TorCtl.ErrorReply, exc:
+    if authCookie: authCookie.close()
+    issue = str(exc)
+    
+    # simplifies message if the wrong credentials were provided (common mistake)
+    if issue.startswith("515 Authentication failed: "):
+      if issue[27:].startswith("Password did not match"):
+        issue = "password incorrect"
+      elif issue[27:] == "Wrong length on authentication cookie.":
+        issue = "cookie value incorrect"
+    
+    raise TorCtl.ErrorReply("Unable to authenticate: %s" % issue)
+  except IOError, exc:
+    if authCookie: authCookie.close()
+    issue = None
+    
+    # cleaner message for common errors
+    if str(exc).startswith("[Errno 13] Permission denied"): issue = "permission denied"
+    elif str(exc).startswith("[Errno 2] No such file or directory"): issue = "file doesn't exist"
+    
+    # if problem's recognized give concise message, otherwise print exception string
+    if issue: raise IOError("Failed to read authentication cookie (%s): %s" % (issue, authVal))
+    else: raise IOError("Failed to read authentication cookie: %s" % exc)
+
+def connect(controlAddr="127.0.0.1", controlPort=9051, passphrase=None):
+  """
+  Convenience method for quickly getting a TorCtl connection. This is very
+  handy for debugging or CLI setup, handling setup and prompting for a password
+  if necessary (if either none is provided as input or it fails). If any issues
+  arise this prints a description of the problem and returns None.
+  
+  Arguments:
+    controlAddr - ip address belonging to the controller
+    controlPort - port belonging to the controller
+    passphrase  - authentication passphrase (if defined this is used rather
+                  than prompting the user)
+  """
+  
+  try:
+    conn, authType = makeCtlConn(controlAddr, controlPort)
+    authValue = None
+    
+    if authType == "PASSWORD":
+      # password authentication, prompting for the password if it wasn't provided
+      if passphrase: authValue = passphrase
+      else:
+        try: authValue = getpass.getpass()
+        except KeyboardInterrupt: return None
+    elif authType.startswith("COOKIE"):
+      authType, authValue = authType.split("=", 1)
+    
+    initCtlConn(conn, authType, authValue)
+    return conn
+  except Exception, exc:
+    if passphrase and str(exc) == "Unable to authenticate: password incorrect":
+      # provide a warning that the provided password didn't work, then try
+      # again prompting for the user to enter it
+      print INCORRECT_PASSWORD_MSG
+      return connect(controlAddr, controlPort)
+    else:
+      print exc
+      return None
+
+def getPid(controlPort=9051):
+  """
+  Attempts to determine the process id for a running tor process, using the
+  following:
+  1. "pidof tor"
+  2. "netstat -npl | grep 127.0.0.1:%s" % <tor control port>
+  3. "ps -o pid -C tor"
+  
+  If pidof or ps provide multiple tor instances then their results are
+  discarded (since only netstat can differentiate using the control port). This
+  provides None if either no running process exists or it can't be determined.
+  
+  Arguments:
+    controlPort - control port of the tor process if multiple exist
+  """
+  
+  # attempts to resolve using pidof, failing if:
+  # - tor's running under a different name
+  # - there's multiple instances of tor
+  try:
+    results = sysTools.call("pidof tor")
+    if len(results) == 1 and len(results[0].split()) == 1:
+      pid = results[0].strip()
+      if pid.isdigit(): return pid
+  except IOError: pass
+  
+  # attempts to resolve using netstat, failing if:
+  # - tor's being run as a different user due to permissions
+  try:
+    results = sysTools.call("netstat -npl | grep 127.0.0.1:%i" % controlPort)
+    
+    if len(results) == 1:
+      results = results[0].split()[6] # process field (ex. "7184/tor")
+      pid = results[:results.find("/")]
+      if pid.isdigit(): return pid
+  except IOError: pass
+  
+  # attempts to resolve using ps, failing if:
+  # - tor's running under a different name
+  # - there's multiple instances of tor
+  try:
+    results = sysTools.call("ps -o pid -C tor")
+    if len(results) == 2:
+      pid = results[1].strip()
+      if pid.isdigit(): return pid
+  except IOError: pass
+  
+  return None
+
+def getConn():
+  """
+  Singleton constructor for a Controller. Be aware that this starts
+  uninitialized, needing a TorCtl instance before it's fully functional.
+  """
+  
+  global CONTROLLER
+  if CONTROLLER == None: CONTROLLER = Controller()
+  return CONTROLLER
+
+class Controller(TorCtl.PostEventListener):
+  """
+  TorCtl wrapper providing convenience functions, listener functionality for
+  tor's state, and the capability for controller connections to be restarted
+  if closed.
+  """
+  
+  def __init__(self):
+    TorCtl.PostEventListener.__init__(self)
+    self.conn = None                    # None if uninitialized or controller's been closed
+    self.connLock = threading.RLock()
+    self.eventListeners = []            # instances listening for tor controller events
+    self.statusListeners = []           # callback functions for tor's state changes
+    self.controllerEvents = {}          # mapping of successfully set controller events to their failure level/msg
+    self._isReset = False               # internal flag for tracking resets
+    self._status = TOR_CLOSED           # current status of the attached control port
+    self._statusTime = 0                # unix time-stamp for when the status was set
+    
+    # cached getInfo parameters (None if unset or possibly changed)
+    self._cachedParam = dict([(arg, "") for arg in CACHE_ARGS])
+  
+  def init(self, conn=None):
+    """
+    Uses the given TorCtl instance for future operations, notifying listeners
+    about the change.
+    
+    Arguments:
+      conn - TorCtl instance to be used, if None then a new instance is fetched
+             via the connect function
+    """
+    
+    if conn == None:
+      conn = connect()
+      
+      if conn == None: raise ValueError("Unable to initialize TorCtl instance.")
+    
+    if conn.is_live() and conn != self.conn:
+      self.connLock.acquire()
+      
+      if self.conn: self.close() # shut down current connection
+      self.conn = conn
+      self.conn.add_event_listener(self)
+      for listener in self.eventListeners: self.conn.add_event_listener(listener)
+      
+      # sets the events listened for by the new controller (incompatible events
+      # are dropped with a logged warning)
+      self.setControllerEvents(self.controllerEvents)
+      
+      self.connLock.release()
+      
+      self._status = TOR_INIT
+      self._statusTime = time.time()
+      
+      # notifies listeners that a new controller is available
+      thread.start_new_thread(self._notifyStatusListeners, (TOR_INIT,))
+  
+  def close(self):
+    """
+    Closes the current TorCtl instance and notifies listeners.
+    """
+    
+    self.connLock.acquire()
+    if self.conn:
+      self.conn.close()
+      self.conn = None
+      self.connLock.release()
+      
+      self._status = TOR_CLOSED
+      self._statusTime = time.time()
+      
+      # notifies listeners that the controller's been shut down
+      thread.start_new_thread(self._notifyStatusListeners, (TOR_CLOSED,))
+    else: self.connLock.release()
+  
+  def isAlive(self):
+    """
+    Returns True if this has been initialized with a working TorCtl instance,
+    False otherwise.
+    """
+    
+    self.connLock.acquire()
+    
+    result = False
+    if self.conn:
+      if self.conn.is_live(): result = True
+      else: self.close()
+    
+    self.connLock.release()
+    return result
+  
+  def getTorCtl(self):
+    """
+    Provides the current TorCtl connection. If unset or closed then this
+    returns None.
+    """
+    
+    self.connLock.acquire()
+    result = None
+    if self.isAlive(): result = self.conn
+    self.connLock.release()
+    
+    return result
+  
+  def getInfo(self, param, default = None, suppressExc = True):
+    """
+    Queries the control port for the given GETINFO option, providing the
+    default if the response fails for any reason (error response, control port
+    closed, initiated, etc).
+    
+    Arguments:
+      param       - GETINFO option to be queried
+      default     - result if the query fails and exception's suppressed
+      suppressExc - suppresses lookup errors (returning the default) if true,
+                    otherwise this raises the original exception
+    """
+    
+    self.connLock.acquire()
+    
+    startTime = time.time()
+    result, raisedExc = default, None
+    if self.isAlive():
+      try:
+        result = self.conn.get_info(param)[param]
+      except (socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed), exc:
+        if type(exc) == TorCtl.TorCtlClosed: self.close()
+        raisedExc = exc
+    
+    msg = "tor control call: GETINFO %s (runtime: %0.4f)" % (param, time.time() - startTime)
+    log.log(CONFIG["log.torGetInfo"], msg)
+    
+    self.connLock.release()
+    
+    if not suppressExc and raisedExc: raise raisedExc
+    else: return result
+  
+  def getOption(self, param, default = None, multiple = False, suppressExc = True):
+    """
+    Queries the control port for the given configuration option, providing the
+    default if the response fails for any reason. If multiple values exist then
+    this arbitrarily returns the first unless the multiple flag is set.
+    
+    Arguments:
+      param       - configuration option to be queried
+      default     - result if the query fails and exception's suppressed
+      multiple    - provides a list of results if true, otherwise this just
+                    returns the first value
+      suppressExc - suppresses lookup errors (returning the default) if true,
+                    otherwise this raises the original exception
+    """
+    
+    self.connLock.acquire()
+    
+    startTime = time.time()
+    result, raisedExc = [], None
+    if self.isAlive():
+      try:
+        if multiple:
+          for key, value in self.conn.get_option(param):
+            if value != None: result.append(value)
+        else: result = self.conn.get_option(param)[0][1]
+      except (socket.error, TorCtl.ErrorReply, TorCtl.TorCtlClosed), exc:
+        if type(exc) == TorCtl.TorCtlClosed: self.close()
+        result, raisedExc = default, exc
+    
+    msg = "tor control call: GETCONF %s (runtime: %0.4f)" % (param, time.time() - startTime)
+    log.log(CONFIG["log.torGetConf"], msg)
+    
+    self.connLock.release()
+    
+    if not suppressExc and raisedExc: raise raisedExc
+    else: return result
+  
+  def getMyNetworkStatus(self, default = None):
+    """
+    Provides the network status entry for this relay if available. This is
+    occasionally expanded so results may vary depending on tor's version. For
+    0.2.2.13 they contained entries like the following:
+    
+    r caerSidi p1aag7VwarGxqctS7/fS0y5FU+s 9On1TRGCEpljszPpJR1hKqlzaY8 2010-05-26 09:26:06 76.104.132.98 9001 0
+    s Fast HSDir Named Running Stable Valid
+    w Bandwidth=25300
+    p reject 1-65535
+    
+    Arguments:
+      default - result if the query fails
+    """
+    
+    return self._getRelayAttr("nsEntry", default)
+  
+  def getMyDescriptor(self, default = None):
+    """
+    Provides the descriptor entry for this relay if available.
+    
+    Arguments:
+      default - result if the query fails
+    """
+    
+    return self._getRelayAttr("descEntry", default)
+  
+  def getMyBandwidthRate(self, default = None):
+    """
+    Provides the effective relaying bandwidth rate of this relay.
+    
+    Arguments:
+      default - result if the query fails
+    """
+    
+    return self._getRelayAttr("bwRate", default)
+  
+  def getMyBandwidthBurst(self, default = None):
+    """
+    Provides the effective bandwidth burst rate of this relay.
+    
+    Arguments:
+      default - result if the query fails
+    """
+    
+    return self._getRelayAttr("bwBurst", default)
+  
+  def getMyBandwidthObserved(self, default = None):
+    """
+    Provides the relay's current observed bandwidth (the throughput determined
+    from historical measurements on the client side). This is used in the
+    heuristic for path selection if the measured bandwidth is undefined.
+    This is fetched from the descriptors and hence will get stale if
+    descriptors aren't periodically updated.
+    
+    Arguments:
+      default - result if the query fails
+    """
+    
+    return self._getRelayAttr("bwObserved", default)
+  
+  def getMyBandwidthMeasured(self, default = None):
+    """
+    Provides the relay's current measured bandwidth (the throughput as noted by
+    the directory authorities and used by clients for relay selection). This is
+    undefined if not in the consensus or with older versions of Tor. Depending
+    on the circumstances this can be from a variety of things (observed,
+    measured, weighted measured, etc) as described by:
+    https://trac.torproject.org/projects/tor/ticket/1566
+    
+    Arguments:
+      default - result if the query fails
+    """
+    
+    return self._getRelayAttr("bwMeasured", default)
+  
+  def getMyFingerprint(self, default = None):
+    """
+    Provides the fingerprint for this relay.
+    
+    Arguments:
+      default - result if the query fails
+    """
+    
+    return self._getRelayAttr("fingerprint", default, False)
+  
+  def getMyFlags(self, default = None):
+    """
+    Provides the flags held by this relay.
+    
+    Arguments:
+      default - result if the query fails or this relay isn't a part of the consensus yet
+    """
+    
+    return self._getRelayAttr("flags", default)
+  
+  def getMyPid(self):
+    """
+    Provides the pid of the attached tor process (None if no controller exists
+    or this can't be determined).
+    """
+    
+    return self._getRelayAttr("pid", None)
+  
+  def getStatus(self):
+    """
+    Provides a tuple consisting of the control port's current status and unix
+    time-stamp for when it became this way (zero if the status has yet to be
+    set).
+    """
+    
+    return (self._status, self._statusTime)
+  
+  def addEventListener(self, listener):
+    """
+    Directs further tor controller events to callback functions of the
+    listener. If a new control connection is initialized then this listener is
+    reattached.
+    
+    Arguments:
+      listener - TorCtl.PostEventListener instance listening for events
+    """
+    
+    self.connLock.acquire()
+    self.eventListeners.append(listener)
+    if self.isAlive(): self.conn.add_event_listener(listener)
+    self.connLock.release()
+  
+  def addStatusListener(self, callback):
+    """
+    Directs further events related to tor's controller status to the callback
+    function.
+    
+    Arguments:
+      callback - functor that'll accept the events, expected to be of the form:
+                 myFunction(controller, eventType)
+    """
+    
+    self.statusListeners.append(callback)
+  
+  def removeStatusListener(self, callback):
+    """
+    Stops listener from being notified of further events. This returns true if a
+    listener's removed, false otherwise.
+    
+    Arguments:
+      callback - functor to be removed
+    """
+    
+    if callback in self.statusListeners:
+      self.statusListeners.remove(callback)
+      return True
+    else: return False
+  
+  def setControllerEvents(self, eventsToMsg):
+    """
+    Sets the events being provided via any associated tor controller, logging
+    messages for event types that aren't supported (possibly due to version
+    issues). This remembers the successfully set events and tries to apply them
+    to any controllers attached later too (again logging and dropping
+    unsuccessful event types). This returns the listing of event types that
+    were successfully set. If no controller is available or events can't be set
+    then this is a no-op.
+    
+    Arguments:
+      eventsToMsg - mapping of event types to a tuple of the (runlevel, msg) it
+                    should log in case of failure (uses DEFAULT_FAILED_EVENT_ENTRY
+                    if mapped to None)
+    """
+    
+    self.connLock.acquire()
+    
+    returnVal = []
+    if self.isAlive():
+      events = set(eventsToMsg.keys())
+      unavailableEvents = set()
+      
+      # removes anything we've already failed to set
+      if DROP_FAILED_EVENTS:
+        unavailableEvents.update(events.intersection(FAILED_EVENTS))
+        events.difference_update(FAILED_EVENTS)
+      
+      # initial check for event availability
+      validEvents = self.getInfo("events/names")
+      
+      if validEvents:
+        validEvents = set(validEvents.split())
+        unavailableEvents.update(events.difference(validEvents))
+        events.intersection_update(validEvents)
+      
+      # attempt to set events
+      isEventsSet, isAbandoned = False, False
+      
+      while not isEventsSet and not isAbandoned:
+        try:
+          self.conn.set_events(list(events))
+          isEventsSet = True
+        except TorCtl.ErrorReply, exc:
+          msg = str(exc)
+          
+          if "Unrecognized event" in msg:
+            # figure out type of event we failed to listen for
+            start = msg.find("event \"") + 7
+            end = msg.rfind("\"")
+            failedType = msg[start:end]
+            
+            unavailableEvents.add(failedType)
+            events.discard(failedType)
+          else:
+            # unexpected error, abandon attempt
+            isAbandoned = True
+        except TorCtl.TorCtlClosed:
+          self.close()
+          isAbandoned = True
+      
+      FAILED_EVENTS.update(unavailableEvents)
+      if not isAbandoned:
+        # removes failed events and logs warnings
+        for eventType in unavailableEvents:
+          if eventsToMsg[eventType]:
+            lvl, msg = eventsToMsg[eventType]
+            log.log(lvl, msg)
+          elif DEFAULT_FAILED_EVENT_ENTRY:
+            lvl, msg = DEFAULT_FAILED_EVENT_ENTRY
+            log.log(lvl, msg % eventType)
+          
+          del eventsToMsg[eventType]
+        
+        self.controllerEvents = eventsToMsg
+        returnVal = eventsToMsg.keys()
+    
+    self.connLock.release()
+    return returnVal
+  
+  def reload(self, issueSighup = False):
+    """
+    This reloads tor, causing its internal state to be reset and the torrc to
+    be reloaded. This can be done either by...
+      - the controller via a RELOAD signal (default and suggested)
+          conn.send_signal("RELOAD")
+      - system reload signal (hup)
+          pkill -sighup tor
+    
+    The latter isn't really useful unless there's some reason the RELOAD signal
+    won't do the trick. Both methods raise an IOError in case of failure.
+    
+    Arguments:
+      issueSighup - issues a sighup rather than a controller RELOAD signal
+    """
+    
+    self.connLock.acquire()
+    
+    raisedException = None
+    if self.isAlive():
+      if not issueSighup:
+        try:
+          self.conn.send_signal("RELOAD")
+        except Exception, exc:
+          # new torrc parameters caused an error (tor's likely shut down)
+          # BUG: this doesn't work - torrc errors still cause TorCtl to crash... :(
+          # http://bugs.noreply.org/flyspray/index.php?do=details&id=1329
+          raisedException = IOError(str(exc))
+      else:
+        try:
+          # Redirects stderr to stdout so we can check error status (output
+          # should be empty if successful). Example error:
+          # pkill: 5592 - Operation not permitted
+          #
+          # note that this may provide multiple errors, even if successful,
+          # hence this:
+          #   - only provide an error if Tor fails to log a sighup
+          #   - provide the error message associated with the tor pid (others
+          #     would be a red herring)
+          if not sysTools.isAvailable("pkill"):
+            raise IOError("pkill command is unavailable")
+          
+          self._isReset = False
+          pkillCall = os.popen("pkill -sighup ^tor$ 2> /dev/stdout")
+          pkillOutput = pkillCall.readlines()
+          pkillCall.close()
+          
+          # Give the sighupTracker a moment to detect the sighup signal. This
+          # is, of course, a possible concurrency bug. However I'm not sure
+          # of a better method for blocking on this...
+          waitStart = time.time()
+          while time.time() - waitStart < 1:
+            time.sleep(0.1)
+            if self._isReset: break
+          
+          if not self._isReset:
+            errorLine, torPid = "", self.getMyPid()
+            if torPid:
+              for line in pkillOutput:
+                if line.startswith("pkill: %s - " % torPid):
+                  errorLine = line
+                  break
+            
+            if errorLine: raise IOError(" ".join(errorLine.split()[3:]))
+            else: raise IOError("failed silently")
+        except IOError, exc:
+          raisedException = exc
+    
+    self.connLock.release()
+    
+    if raisedException: raise raisedException
+  
+  def msg_event(self, event):
+    """
+    Listens for the reload signal (hup), which can be produced either by a
+    sighup or a controller RELOAD signal, and causes the torrc and internal
+    state to be reset.
+    """
+    
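+    # when tor gets a sighup it logs a notice along the lines of:
+    # "Received reload signal (hup). Reloading config and resetting internal state."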
+    if event.level == "NOTICE" and event.msg.startswith("Received reload signal (hup)"):
+      self._isReset = True
+      
+      self._status = TOR_INIT
+      self._statusTime = time.time()
+      
+      thread.start_new_thread(self._notifyStatusListeners, (TOR_INIT,))
+  
+  def ns_event(self, event):
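+    # invalidates cached consensus attributes for this relay if its network
+    # status entry is among those updated (or if our fingerprint is unknown)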
+    myFingerprint = self.getMyFingerprint()
+    if myFingerprint:
+      for ns in event.nslist:
+        if ns.idhex == myFingerprint:
+          self._cachedParam["nsEntry"] = None
+          self._cachedParam["flags"] = None
+          self._cachedParam["bwMeasured"] = None
+          return
+    else:
+      self._cachedParam["nsEntry"] = None
+      self._cachedParam["flags"] = None
+      self._cachedParam["bwMeasured"] = None
+  
+  def new_consensus_event(self, event):
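+    # a new consensus invalidates any cached network status attributes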
+    self._cachedParam["nsEntry"] = None
+    self._cachedParam["flags"] = None
+    self._cachedParam["bwMeasured"] = None
+  
+  def new_desc_event(self, event):
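+    # invalidates cached descriptor attributes if the new descriptors include
+    # our own (or if our fingerprint is unknown)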
+    myFingerprint = self.getMyFingerprint()
+    if not myFingerprint or myFingerprint in event.idlist:
+      self._cachedParam["descEntry"] = None
+      self._cachedParam["bwObserved"] = None
+  
+  def _getRelayAttr(self, key, default, cacheUndefined = True):
+    """
+    Provides information associated with this relay, using the cached value if
+    available and otherwise looking it up.
+    
+    Arguments:
+      key            - parameter being queried (from CACHE_ARGS)
+      default        - value to be returned if undefined
+      cacheUndefined - caches when values are undefined, avoiding further
+                       lookups if true
+    """
+    
+    currentVal = self._cachedParam[key]
+    if currentVal:
+      if currentVal == UNKNOWN: return default
+      else: return currentVal
+    
+    self.connLock.acquire()
+    
+    currentVal, result = self._cachedParam[key], None
+    if not currentVal and self.isAlive():
+      # still unset - fetch value
+      if key in ("nsEntry", "descEntry"):
+        myFingerprint = self.getMyFingerprint()
+        
+        if myFingerprint:
+          queryType = "ns" if key == "nsEntry" else "desc"
+          queryResult = self.getInfo("%s/id/%s" % (queryType, myFingerprint))
+          if queryResult: result = queryResult.split("\n")
+      elif key == "bwRate":
+        # effective relayed bandwidth is the minimum of BandwidthRate,
+        # MaxAdvertisedBandwidth, and RelayBandwidthRate (if set)
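+        # ex (illustrative): a BandwidthRate of 5 MB with a RelayBandwidthRate
+        # of 2 MB gives an effective rate of 2 MB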
+        effectiveRate = int(self.getOption("BandwidthRate"))
+        
+        relayRate = self.getOption("RelayBandwidthRate")
+        if relayRate and relayRate != "0":
+          effectiveRate = min(effectiveRate, int(relayRate))
+        
+        maxAdvertised = self.getOption("MaxAdvertisedBandwidth")
+        if maxAdvertised: effectiveRate = min(effectiveRate, int(maxAdvertised))
+        
+        result = effectiveRate
+      elif key == "bwBurst":
+        # effective burst (same for BandwidthBurst and RelayBandwidthBurst)
+        effectiveBurst = int(self.getOption("BandwidthBurst"))
+        
+        relayBurst = self.getOption("RelayBandwidthBurst")
+        if relayBurst and relayBurst != "0":
+          effectiveBurst = min(effectiveBurst, int(relayBurst))
+        
+        result = effectiveBurst
+      elif key == "bwObserved":
+        for line in self.getMyDescriptor([]):
+          if line.startswith("bandwidth"):
+            # line should look something like:
+            # bandwidth 40960 102400 47284
+            comp = line.split()
+            
+            if len(comp) == 4 and comp[-1].isdigit():
+              result = int(comp[-1])
+              break
+      elif key == "bwMeasured":
+        # TODO: Currently there's no client side indication of what type of
+        # measurement was used. Include this in results if it's ever available.
+        
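+        # the measured bandwidth line should look something like:
+        # w Bandwidth=7514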
+        for line in self.getMyNetworkStatus([]):
+          if line.startswith("w Bandwidth="):
+            bwValue = line[12:]
+            if bwValue.isdigit(): result = int(bwValue)
+            break
+      elif key == "fingerprint":
+        # fingerprints are kept until sighup if set (most likely not even a
+        # setconf can change it since it's in the data directory)
+        result = self.getInfo("fingerprint")
+      elif key == "flags":
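+        # the flag line of our status entry should look something like:
+        # s Fast Guard Named Running Stable Valid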
+        for line in self.getMyNetworkStatus([]):
+          if line.startswith("s "):
+            result = line[2:].split()
+            break
+      elif key == "pid":
+        result = getPid(int(self.getOption("ControlPort", 9051)))
+      
+      # cache value
+      if result: self._cachedParam[key] = result
+      elif cacheUndefined: self._cachedParam[key] = UNKNOWN
+    elif currentVal == UNKNOWN: result = currentVal
+    
+    self.connLock.release()
+    
+    if result and result != UNKNOWN: return result
+    else: return default
+  
+  def _notifyStatusListeners(self, eventType):
+    """
+    Sends a notice to all current listeners that a given change in tor's
+    controller status has occurred.
+    
+    Arguments:
+      eventType - enum representing tor's new status
+    """
+    
+    # resets cached getInfo parameters
+    self._cachedParam = dict([(arg, "") for arg in CACHE_ARGS])
+    
+    for callback in self.statusListeners:
+      callback(self, eventType)
+

Modified: arm/release/util/uiTools.py
===================================================================
--- arm/release/util/uiTools.py	2010-07-07 16:44:54 UTC (rev 22616)
+++ arm/release/util/uiTools.py	2010-07-07 16:48:51 UTC (rev 22617)
@@ -5,8 +5,11 @@
 - unit conversion for labels
 """
 
+import sys
 import curses
 
+import log
+
 # colors curses can handle
 COLOR_LIST = {"red": curses.COLOR_RED,        "green": curses.COLOR_GREEN,
               "yellow": curses.COLOR_YELLOW,  "blue": curses.COLOR_BLUE,
@@ -16,7 +19,7 @@
 # mappings for getColor() - this uses the default terminal color scheme if
 # color support is unavailable
 COLOR_ATTR_INITIALIZED = False
-COLOR_ATTR = dict([(color, 0) for color in COLOR_LIST.keys()])
+COLOR_ATTR = dict([(color, 0) for color in COLOR_LIST])
 
 # value tuples for label conversions (bytes / seconds, short label, long label)
 SIZE_UNITS = [(1125899906842624.0, " PB", " Petabyte"), (1099511627776.0, " TB", " Terabyte"),
@@ -25,6 +28,11 @@
 TIME_UNITS = [(86400.0, "d", " day"),                   (3600.0, "h", " hour"),
               (60.0, "m", " minute"),                   (1.0, "s", " second")]
 
+CONFIG = {"features.colorInterface": True, "log.cursesColorSupport": log.INFO}
+
+def loadConfig(config):
+  config.update(CONFIG)
+
 def getColor(color):
   """
   Provides attribute corresponding to a given text color. Supported colors
@@ -42,11 +50,64 @@
   if not COLOR_ATTR_INITIALIZED: _initColors()
   return COLOR_ATTR[color]
 
+def cropStr(msg, size, minWordLen = 4, addEllipse = True):
+  """
+  Provides the msg constrained to the given length, truncating on word breaks.
+  If the last word is long this truncates mid-word with an ellipse. If there
+  isn't room for even a single truncated word (plus the ellipse, if enabled)
+  then this provides an empty string. Examples:
+  
+  cropStr("This is a looooong message", 17)
+  "This is a looo..."
+  
+  cropStr("This is a looooong message", 12)
+  "This is a..."
+  
+  cropStr("This is a looooong message", 3)
+  ""
+  
+  Arguments:
+    msg        - source text
+    size       - room available for text
+    minWordLen - minimum characters that must fit for a truncated word to be
+                 included rather than dropped (requires the whole word if -1)
+    addEllipse - includes an ellipse when truncating if true
+  """
+  
+  if minWordLen < 0: minWordLen = sys.maxint
+  
+  if len(msg) <= size: return msg
+  else:
+    msgWords = msg.split(" ")
+    msgWords.reverse()
+    
+    returnWords = []
+    sizeLeft = size - 3 if addEllipse else size
+    
+    # checks that there's room for at least one word
+    if min(minWordLen, len(msgWords[-1])) > sizeLeft: return ""
+    
+    while sizeLeft > 0:
+      nextWord = msgWords.pop()
+      
+      if len(nextWord) <= sizeLeft:
+        returnWords.append(nextWord)
+        sizeLeft -= (len(nextWord) + 1)
+      elif minWordLen <= sizeLeft:
+        returnWords.append(nextWord[:sizeLeft])
+        sizeLeft = 0
+      else: sizeLeft = 0
+    
+    returnMsg = " ".join(returnWords)
+    if addEllipse: returnMsg += "..."
+    return returnMsg
+
 def getSizeLabel(bytes, decimal = 0, isLong = False):
   """
   Converts byte count into label in its most significant units, for instance
   7500 bytes would return "7 KB". If the isLong option is used this expands
-  unit labels to be the properly pluralised full word (for instance 'Kilobytes'
+  unit labels to be the properly pluralized full word (for instance 'Kilobytes'
   rather than 'KB'). Units go up through PB.
   
   Example Usage:
@@ -69,7 +130,7 @@
   
   This defaults to presenting single character labels, but if the isLong option
   is used this expands labels to be the full word (space included and properly
-  pluralised). For instance, "4h" would be "4 hours" and "1m" would become
+  pluralized). For instance, "4h" would be "4 hours" and "1m" would become
   "1 minute".
   
   Example Usage:
@@ -135,7 +196,7 @@
       else:
         # unfortunately the %f formatting has no method of rounding down, so
         # reducing value to only concern the digits that are visible - note
-        # that this doesn't work with miniscule values (starts breaking down at
+        # that this doesn't work with minuscule values (starts breaking down at
         # around eight decimal places) or edge cases when working with powers
         # of two
         croppedCount = count - (count % (countPerUnit / (10 ** decimal)))
@@ -157,18 +218,23 @@
   
   global COLOR_ATTR_INITIALIZED
   if not COLOR_ATTR_INITIALIZED:
+    COLOR_ATTR_INITIALIZED = True
+    if not CONFIG["features.colorInterface"]: return
+    
     try: hasColorSupport = curses.has_colors()
     except curses.error: return # initscr hasn't been called yet
     
     # initializes color mappings if color support is available
-    COLOR_ATTR_INITIALIZED = True
     if hasColorSupport:
       colorpair = 0
+      log.log(CONFIG["log.cursesColorSupport"], "Terminal color support detected and enabled")
       
-      for colorName in COLOR_LIST.keys():
+      for colorName in COLOR_LIST:
         fgColor = COLOR_LIST[colorName]
         bgColor = -1 # allows for default (possibly transparent) background
         colorpair += 1
         curses.init_pair(colorpair, fgColor, bgColor)
         COLOR_ATTR[colorName] = curses.color_pair(colorpair)
+    else:
+      log.log(CONFIG["log.cursesColorSupport"], "Terminal color support unavailable")
 


