tor-commits
September 2020: 17 participants, 1822 discussions

[translation/communitytpo-contentspot] https://gitweb.torproject.org/translation.git/commit/?h=communitytpo-contentspot
by translation@torproject.org 01 Sep '20
commit 08177a210516f3fe902b11b9eabaab1737b05546
Author: Translation commit bot <translation(a)torproject.org>
Date: Tue Sep 1 21:45:11 2020 +0000
https://gitweb.torproject.org/translation.git/commit/?h=communitytpo-conten…
---
contents+ar.po | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/contents+ar.po b/contents+ar.po
index 611c7f5f0c..268b82bfb8 100644
--- a/contents+ar.po
+++ b/contents+ar.po
@@ -3993,6 +3993,8 @@ msgid ""
"most critical are Tor Browser, the Tor Browser User Manual, and our Support "
"Portal:"
msgstr ""
+"على الرغم من أننا نقدر مساهمتك في أي من المشاريع المذكورة أعلاه ، إلا أن "
+"أهمها هو متصفح Tor ودليل مستخدم متصفح Tor وبوابة الدعم الخاصة بنا:"
#: https//community.torproject.org/localization/pick-a-project/
#: (content/localization/pick-a-project/contents+en.lrpage.body)
@@ -4001,6 +4003,9 @@ msgid ""
"can see the [Tor Browser total strings translated per "
"language](https://torpat.ch/locales) to see where help is needed."
msgstr ""
+"* يُترجم متصفح Tor في العديد من موارد Transifex المختلفة ، ولكن يمكنك رؤية "
+"[مجموع سلاسل متصفح Tor مترجمة لكل لغة] (https://torpat.ch/locales) لمعرفة "
+"أين تحتاج إلى المساعدة."
#: https//community.torproject.org/localization/pick-a-project/
#: (content/localization/pick-a-project/contents+en.lrpage.body)
@@ -4011,6 +4016,11 @@ msgid ""
"[translate](https://www.transifex.com/otf/tor-project-support-community-"
"portal/tbmanual-contentspot/)."
msgstr ""
+"* يعد دليل مستخدم متصفح Tor مصدرًا مفيدًا جدًا للمستخدمين الجدد الذين لا "
+"يتحدثون الإنجليزية ، راجع [إحصائيات ترجمة دليل مستخدم متصفح Tor] "
+"(https://torpat.ch/manual-locales) أو [ترجم] (https: / "
+"/www.transifex.com/otf/tor-project-support-community-portal/tbmanual-"
+"contentspot/)."
#: https//community.torproject.org/localization/pick-a-project/
#: (content/localization/pick-a-project/contents+en.lrpage.body)
@@ -4020,11 +4030,15 @@ msgid ""
"or [translate](https://www.transifex.com/otf/tor-project-support-community-"
"portal/support-portal/)"
msgstr ""
+"* تعد بوابة الدعم أيضًا مصدرًا قيمًا لجميع مستخدمي Tor ، راجع [إحصائيات "
+"ترجمة بوابة دعم Tor] (https://torpat.ch/support-locales) أو [ترجم] "
+"(https://www.transifex.com/ otf / tor-project-support-community-portal / "
+"support-portal /)"
#: https//community.torproject.org/localization/translation-problem/
#: (content/localization/translation-problem/contents+en.lrpage.title)
msgid "Report a problem with a translation"
-msgstr ""
+msgstr "الإبلاغ عن مشكلة في الترجمة"
#: https//community.torproject.org/localization/translation-problem/
#: (content/localization/translation-problem/contents+en.lrpage.subtitle)
@@ -4032,11 +4046,13 @@ msgid ""
"Sometimes the translations of apps are not working correctly. Here you can "
"learn to fix it."
msgstr ""
+"في بعض الأحيان لا تعمل ترجمات التطبيقات بشكل صحيح. هنا يمكنك تعلم كيفية "
+"إصلاحه."
#: https//community.torproject.org/localization/translation-problem/
#: (content/localization/translation-problem/contents+en.lrpage.body)
msgid "### Reporting an error with a translation"
-msgstr ""
+msgstr "### الإبلاغ عن خطأ في الترجمة"
#: https//community.torproject.org/localization/translation-problem/
#: (content/localization/translation-problem/contents+en.lrpage.body)
commit 4164c7a6203fae6671075dfba69461340dd05bc5
Author: Damian Johnson <atagar(a)torproject.org>
Date: Tue Sep 1 13:57:23 2020 -0700
Replace all IOErrors with OSErrors
PEP 3151 deprecated IOError...
https://www.python.org/dev/peps/pep-3151/#confusing-set-of-os-related-excep…
Python 3.3 turned IOError into an OSError alias, so this commit shouldn't
impact our users...
>>> raise OSError('boom')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: boom
>>> raise IOError('boom')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: boom
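For reference, a minimal check of that alias relationship (illustrative only, not part of the original commit message): in Python 3.3 and later the two names refer to the same class, so a handler written for either catches both.
>>> # IOError and OSError are the same class since Python 3.3 (PEP 3151)
>>> IOError is OSError
True
>>> try:
...     raise IOError('boom')
... except OSError as exc:
...     print('caught: %s' % exc)
...
caught: boom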
---
cache_fallback_directories.py | 2 +-
cache_manual.py | 4 +-
docs/_static/example/check_digests.py | 2 +-
docs/_static/example/manual_config_options.py | 2 +-
docs/_static/example/reading_twitter.py | 4 +-
docs/change_log.rst | 1 +
stem/__init__.py | 2 +-
stem/connection.py | 2 +-
stem/descriptor/__init__.py | 10 ++---
stem/descriptor/bandwidth_file.py | 2 +-
stem/descriptor/collector.py | 8 ++--
stem/descriptor/extrainfo_descriptor.py | 2 +-
stem/descriptor/hidden_service.py | 2 +-
stem/descriptor/microdescriptor.py | 2 +-
stem/descriptor/networkstatus.py | 6 +--
stem/descriptor/router_status_entry.py | 2 +-
stem/descriptor/server_descriptor.py | 2 +-
stem/descriptor/tordnsel.py | 2 +-
stem/directory.py | 18 ++++----
stem/interpreter/__init__.py | 2 +-
stem/manual.py | 26 +++++------
stem/util/conf.py | 8 ++--
stem/util/connection.py | 12 +++---
stem/util/proc.py | 62 +++++++++++++--------------
stem/util/system.py | 16 +++----
stem/version.py | 12 +++---
test/integ/util/connection.py | 2 +-
test/integ/version.py | 4 +-
test/unit/descriptor/collector.py | 8 ++--
test/unit/directory/fallback.py | 6 +--
test/unit/manual.py | 12 +++---
test/unit/util/connection.py | 48 ++++++++++-----------
test/unit/util/proc.py | 4 +-
test/unit/util/system.py | 4 +-
test/unit/version.py | 4 +-
35 files changed, 153 insertions(+), 152 deletions(-)
diff --git a/cache_fallback_directories.py b/cache_fallback_directories.py
index 7f712683..8fe425d1 100755
--- a/cache_fallback_directories.py
+++ b/cache_fallback_directories.py
@@ -26,7 +26,7 @@ if __name__ == '__main__':
try:
stem_commit = stem.util.system.call('git rev-parse HEAD')[0]
- except IOError as exc:
+ except OSError as exc:
print("Unable to determine stem's current commit: %s" % exc)
sys.exit(1)
diff --git a/cache_manual.py b/cache_manual.py
index 803197f1..4ddb843f 100755
--- a/cache_manual.py
+++ b/cache_manual.py
@@ -26,7 +26,7 @@ if __name__ == '__main__':
try:
stem_commit = stem.util.system.call('git rev-parse HEAD')[0]
- except IOError as exc:
+ except OSError as exc:
print("Unable to determine stem's current commit: %s" % exc)
sys.exit(1)
@@ -39,7 +39,7 @@ if __name__ == '__main__':
db_schema = cached_manual.schema
except stem.manual.SchemaMismatch as exc:
cached_manual, db_schema = None, exc.database_schema
- except IOError:
+ except OSError:
cached_manual, db_schema = None, None # local copy has been deleted
if db_schema != stem.manual.SCHEMA_VERSION:
diff --git a/docs/_static/example/check_digests.py b/docs/_static/example/check_digests.py
index 2be3c368..69f509cf 100644
--- a/docs/_static/example/check_digests.py
+++ b/docs/_static/example/check_digests.py
@@ -19,7 +19,7 @@ def download_descriptors(fingerprint):
router_status_entries = filter(lambda desc: desc.fingerprint == fingerprint, conensus_query.run())
if len(router_status_entries) != 1:
- raise IOError("Unable to find relay '%s' in the consensus" % fingerprint)
+ raise OSError("Unable to find relay '%s' in the consensus" % fingerprint)
return (
router_status_entries[0],
diff --git a/docs/_static/example/manual_config_options.py b/docs/_static/example/manual_config_options.py
index 964ff523..4a503579 100644
--- a/docs/_static/example/manual_config_options.py
+++ b/docs/_static/example/manual_config_options.py
@@ -5,7 +5,7 @@ try:
print("Downloading tor's manual information, please wait...")
manual = Manual.from_remote()
print(" done\n")
-except IOError as exc:
+except OSError as exc:
print(" unsuccessful (%s), using information provided with stem\n" % exc)
manual = Manual.from_cache() # fall back to our bundled manual information
diff --git a/docs/_static/example/reading_twitter.py b/docs/_static/example/reading_twitter.py
index 7f9094b3..5709e1f4 100644
--- a/docs/_static/example/reading_twitter.py
+++ b/docs/_static/example/reading_twitter.py
@@ -65,7 +65,7 @@ def poll_twitter_feed(user_id, tweet_count):
try:
api_response = urllib2.urlopen(api_request).read()
except:
- raise IOError("Unable to reach %s" % TWITTER_API_URL)
+ raise OSError("Unable to reach %s" % TWITTER_API_URL)
return json.loads(api_response)
@@ -81,7 +81,7 @@ try:
print("%i. %s" % (index + 1, tweet["created_at"]))
print(tweet["text"])
print("")
-except IOError as exc:
+except OSError as exc:
print(exc)
finally:
tor_process.kill() # stops tor
diff --git a/docs/change_log.rst b/docs/change_log.rst
index f5a96d7b..bbd13ef5 100644
--- a/docs/change_log.rst
+++ b/docs/change_log.rst
@@ -54,6 +54,7 @@ The following are only available within Stem's `git repository
* Migrated to `asyncio <https://docs.python.org/3/library/asyncio.html>`_. Stem can now be used by `both synchronous and asynchronous applications <https://blog.atagar.com/july2020/>`_.
* Installation has migrated from distutils to setuptools.
* Added the 'reset_timeouts' argument to :func:`~stem.control.Controller.drop_guards` (:ticket:`73`)
+ * Replace all IOErrors with OSErrors. Python 3.3 changed IOError into an `OSError alias <https://docs.python.org/3/library/exceptions.html#OSError>`_ to `deprecate it <https://www.python.org/dev/peps/pep-3151/#confusing-set-of-os-related-excep…>`_.
* **Controller**
diff --git a/stem/__init__.py b/stem/__init__.py
index 228ec7be..e8782115 100644
--- a/stem/__init__.py
+++ b/stem/__init__.py
@@ -731,7 +731,7 @@ class SocketClosed(SocketError):
'Control socket was closed before completing the message.'
-class DownloadFailed(IOError):
+class DownloadFailed(OSError):
"""
Inability to download a resource. Python's urllib module raises
a wide variety of undocumented exceptions (urllib.request.URLError,
diff --git a/stem/connection.py b/stem/connection.py
index 0dbbcfbf..f5f92464 100644
--- a/stem/connection.py
+++ b/stem/connection.py
@@ -1157,7 +1157,7 @@ def _read_cookie(cookie_path: str, is_safecookie: bool) -> bytes:
try:
with open(cookie_path, 'rb', 0) as f:
return f.read()
- except IOError as exc:
+ except OSError as exc:
exc_msg = "Authentication failed: unable to read '%s' (%s)" % (cookie_path, exc)
raise UnreadableCookieFile(exc_msg, cookie_path, is_safecookie)
diff --git a/stem/descriptor/__init__.py b/stem/descriptor/__init__.py
index df947cf7..58a88d55 100644
--- a/stem/descriptor/__init__.py
+++ b/stem/descriptor/__init__.py
@@ -238,7 +238,7 @@ class _Compression(object):
:raises:
If unable to decompress this provide...
- * **IOError** if content isn't compressed with this
+ * **OSError** if content isn't compressed with this
* **ImportError** if this method if decompression is unavalable
"""
@@ -253,7 +253,7 @@ class _Compression(object):
try:
return self._decompression_func(self._module, content)
except Exception as exc:
- raise IOError('Failed to decompress as %s: %s' % (self, exc))
+ raise OSError('Failed to decompress as %s: %s' % (self, exc))
def __str__(self) -> str:
return self._name
@@ -370,7 +370,7 @@ def parse_file(descriptor_file: Union[str, BinaryIO, tarfile.TarFile, IO[bytes]]
:raises:
* **ValueError** if the contents is malformed and validate is True
* **TypeError** if we can't match the contents of the file to a descriptor type
- * **IOError** if unable to read from the descriptor_file
+ * **OSError** if unable to read from the descriptor_file
"""
# Delegate to a helper if this is a path or tarfile.
@@ -392,7 +392,7 @@ def parse_file(descriptor_file: Union[str, BinaryIO, tarfile.TarFile, IO[bytes]]
return
if not descriptor_file.seekable(): # type: ignore
- raise IOError(UNSEEKABLE_MSG)
+ raise OSError(UNSEEKABLE_MSG)
# The tor descriptor specifications do not provide a reliable method for
# identifying a descriptor file's type and version so we need to guess
@@ -860,7 +860,7 @@ class Descriptor(object):
:raises:
* **ValueError** if the contents is malformed and validate is True
* **TypeError** if we can't match the contents of the file to a descriptor type
- * **IOError** if unable to read from the descriptor_file
+ * **OSError** if unable to read from the descriptor_file
"""
if 'descriptor_type' not in kwargs and cls.TYPE_ANNOTATION_NAME is not None:
diff --git a/stem/descriptor/bandwidth_file.py b/stem/descriptor/bandwidth_file.py
index 2b1ed5d3..f449665d 100644
--- a/stem/descriptor/bandwidth_file.py
+++ b/stem/descriptor/bandwidth_file.py
@@ -175,7 +175,7 @@ def _parse_file(descriptor_file: BinaryIO, validate: bool = False, **kwargs: Any
:raises:
* **ValueError** if the contents is malformed and validate is **True**
- * **IOError** if the file can't be read
+ * **OSError** if the file can't be read
"""
if kwargs:
diff --git a/stem/descriptor/collector.py b/stem/descriptor/collector.py
index 8c05a832..892ccc30 100644
--- a/stem/descriptor/collector.py
+++ b/stem/descriptor/collector.py
@@ -307,7 +307,7 @@ class File(object):
:raises:
* :class:`~stem.DownloadFailed` if the download fails
- * **IOError** if a mismatching file exists and **overwrite** is **False**
+ * **OSError** if a mismatching file exists and **overwrite** is **False**
"""
filename = self.path.split('/')[-1]
@@ -332,7 +332,7 @@ class File(object):
if expected_hash == actual_hash:
return path # nothing to do, we already have the file
elif not overwrite:
- raise IOError("%s already exists but mismatches CollecTor's checksum (expected: %s, actual: %s)" % (path, expected_hash, actual_hash))
+ raise OSError("%s already exists but mismatches CollecTor's checksum (expected: %s, actual: %s)" % (path, expected_hash, actual_hash))
response = stem.util.connection.download(COLLECTOR_URL + self.path, timeout, retries)
@@ -624,7 +624,7 @@ class CollecTor(object):
If unable to retrieve the index this provide...
* **ValueError** if json is malformed
- * **IOError** if unable to decompress
+ * **OSError** if unable to decompress
* :class:`~stem.DownloadFailed` if the download fails
"""
@@ -664,7 +664,7 @@ class CollecTor(object):
If unable to retrieve the index this provide...
* **ValueError** if json is malformed
- * **IOError** if unable to decompress
+ * **OSError** if unable to decompress
* :class:`~stem.DownloadFailed` if the download fails
"""
diff --git a/stem/descriptor/extrainfo_descriptor.py b/stem/descriptor/extrainfo_descriptor.py
index 86458cdf..8d9cbf30 100644
--- a/stem/descriptor/extrainfo_descriptor.py
+++ b/stem/descriptor/extrainfo_descriptor.py
@@ -182,7 +182,7 @@ def _parse_file(descriptor_file: BinaryIO, is_bridge = False, validate = False,
:raises:
* **ValueError** if the contents is malformed and validate is **True**
- * **IOError** if the file can't be read
+ * **OSError** if the file can't be read
"""
if kwargs:
diff --git a/stem/descriptor/hidden_service.py b/stem/descriptor/hidden_service.py
index 49737f9d..dc86b382 100644
--- a/stem/descriptor/hidden_service.py
+++ b/stem/descriptor/hidden_service.py
@@ -451,7 +451,7 @@ def _parse_file(descriptor_file: BinaryIO, desc_type: Optional[Type['stem.descri
:raises:
* **ValueError** if the contents is malformed and validate is **True**
- * **IOError** if the file can't be read
+ * **OSError** if the file can't be read
"""
if desc_type is None:
diff --git a/stem/descriptor/microdescriptor.py b/stem/descriptor/microdescriptor.py
index 0b015ebb..ea48fce8 100644
--- a/stem/descriptor/microdescriptor.py
+++ b/stem/descriptor/microdescriptor.py
@@ -108,7 +108,7 @@ def _parse_file(descriptor_file: BinaryIO, validate: bool = False, **kwargs: Any
:raises:
* **ValueError** if the contents is malformed and validate is True
- * **IOError** if the file can't be read
+ * **OSError** if the file can't be read
"""
if kwargs:
diff --git a/stem/descriptor/networkstatus.py b/stem/descriptor/networkstatus.py
index f216e427..e8cf90eb 100644
--- a/stem/descriptor/networkstatus.py
+++ b/stem/descriptor/networkstatus.py
@@ -317,7 +317,7 @@ def _parse_file(document_file: BinaryIO, document_type: Optional[Type] = None, v
:raises:
* **ValueError** if the document_version is unrecognized or the contents is
malformed and validate is **True**
- * **IOError** if the file can't be read
+ * **OSError** if the file can't be read
"""
# we can't properly default this since NetworkStatusDocumentV3 isn't defined yet
@@ -390,7 +390,7 @@ def _parse_file_key_certs(certificate_file: BinaryIO, validate: bool = False) ->
:raises:
* **ValueError** if the key certificates are invalid and validate is **True**
- * **IOError** if the file can't be read
+ * **OSError** if the file can't be read
"""
while True:
@@ -419,7 +419,7 @@ def _parse_file_detached_sigs(detached_signature_file: BinaryIO, validate: bool
:raises:
* **ValueError** if the detached signatures are invalid and validate is **True**
- * **IOError** if the file can't be read
+ * **OSError** if the file can't be read
"""
while True:
diff --git a/stem/descriptor/router_status_entry.py b/stem/descriptor/router_status_entry.py
index ecfcb7fd..aa94a703 100644
--- a/stem/descriptor/router_status_entry.py
+++ b/stem/descriptor/router_status_entry.py
@@ -72,7 +72,7 @@ def _parse_file(document_file: BinaryIO, validate: bool, entry_class: Type['stem
:raises:
* **ValueError** if the contents is malformed and validate is **True**
- * **IOError** if the file can't be read
+ * **OSError** if the file can't be read
"""
if start_position:
diff --git a/stem/descriptor/server_descriptor.py b/stem/descriptor/server_descriptor.py
index 6e6a5369..e49688e1 100644
--- a/stem/descriptor/server_descriptor.py
+++ b/stem/descriptor/server_descriptor.py
@@ -159,7 +159,7 @@ def _parse_file(descriptor_file: BinaryIO, is_bridge: bool = False, validate: bo
:raises:
* **ValueError** if the contents is malformed and validate is True
- * **IOError** if the file can't be read
+ * **OSError** if the file can't be read
"""
# Handler for relay descriptors
diff --git a/stem/descriptor/tordnsel.py b/stem/descriptor/tordnsel.py
index 6b9d4ceb..32cf1863 100644
--- a/stem/descriptor/tordnsel.py
+++ b/stem/descriptor/tordnsel.py
@@ -35,7 +35,7 @@ def _parse_file(tordnsel_file: BinaryIO, validate: bool = False, **kwargs: Any)
:raises:
* **ValueError** if the contents is malformed and validate is **True**
- * **IOError** if the file can't be read
+ * **OSError** if the file can't be read
"""
if kwargs:
diff --git a/stem/directory.py b/stem/directory.py
index 20982ccc..39abfb90 100644
--- a/stem/directory.py
+++ b/stem/directory.py
@@ -192,7 +192,7 @@ class Directory(object):
try:
authorities = stem.directory.Authority.from_remote()
- except IOError:
+ except OSError:
authorities = stem.directory.Authority.from_cache()
.. versionadded:: 1.5.0
@@ -205,7 +205,7 @@ class Directory(object):
:returns: **dict** of **str** identifiers to their
:class:`~stem.directory.Directory`
- :raises: **IOError** if unable to retrieve the fallback directories
+ :raises: **OSError** if unable to retrieve the fallback directories
"""
raise NotImplementedError('Unsupported Operation: this should be implemented by the Directory subclass')
@@ -251,7 +251,7 @@ class Authority(Directory):
lines = str_tools._to_unicode(urllib.request.urlopen(GITWEB_AUTHORITY_URL, timeout = timeout).read()).splitlines()
if not lines:
- raise IOError('no content')
+ raise OSError('no content')
except:
exc, stacktrace = sys.exc_info()[1:3]
message = "Unable to download tor's directory authorities from %s: %s" % (GITWEB_AUTHORITY_URL, exc)
@@ -280,7 +280,7 @@ class Authority(Directory):
v3ident = matches.get(AUTHORITY_V3IDENT), # type: ignore
)
except ValueError as exc:
- raise IOError(str(exc))
+ raise OSError(str(exc))
return results
@@ -373,7 +373,7 @@ class Fallback(Directory):
attr[attr_name] = conf.get(key)
if not attr[attr_name] and attr_name not in ('nickname', 'has_extrainfo', 'orport6_address', 'orport6_port'):
- raise IOError("'%s' is missing from %s" % (key, FALLBACK_CACHE_PATH))
+ raise OSError("'%s' is missing from %s" % (key, FALLBACK_CACHE_PATH))
if attr['orport6_address'] and attr['orport6_port']:
orport_v6 = (attr['orport6_address'], int(attr['orport6_port']))
@@ -399,7 +399,7 @@ class Fallback(Directory):
lines = str_tools._to_unicode(urllib.request.urlopen(GITWEB_FALLBACK_URL, timeout = timeout).read()).splitlines()
if not lines:
- raise IOError('no content')
+ raise OSError('no content')
except:
exc, stacktrace = sys.exc_info()[1:3]
message = "Unable to download tor's fallback directories from %s: %s" % (GITWEB_FALLBACK_URL, exc)
@@ -408,7 +408,7 @@ class Fallback(Directory):
# header metadata
if lines[0] != '/* type=fallback */':
- raise IOError('%s does not have a type field indicating it is fallback directory metadata' % GITWEB_FALLBACK_URL)
+ raise OSError('%s does not have a type field indicating it is fallback directory metadata' % GITWEB_FALLBACK_URL)
header = {}
@@ -418,7 +418,7 @@ class Fallback(Directory):
if mapping:
header[mapping.group(1)] = mapping.group(2)
else:
- raise IOError('Malformed fallback directory header line: %s' % line)
+ raise OSError('Malformed fallback directory header line: %s' % line)
Fallback._pop_section(lines) # skip human readable comments
@@ -446,7 +446,7 @@ class Fallback(Directory):
header = header,
)
except ValueError as exc:
- raise IOError(str(exc))
+ raise OSError(str(exc))
return results
diff --git a/stem/interpreter/__init__.py b/stem/interpreter/__init__.py
index 1445d9bb..53589bf8 100644
--- a/stem/interpreter/__init__.py
+++ b/stem/interpreter/__init__.py
@@ -137,7 +137,7 @@ def main() -> None:
try:
for line in open(args.run_path).readlines():
interpreter.run_command(line.strip(), print_response = True)
- except IOError as exc:
+ except OSError as exc:
print(format(msg('msg.unable_to_read_file', path = args.run_path, error = exc), *ERROR_OUTPUT))
sys.exit(1)
else:
diff --git a/stem/manual.py b/stem/manual.py
index 26249abc..dbde9827 100644
--- a/stem/manual.py
+++ b/stem/manual.py
@@ -100,7 +100,7 @@ CATEGORY_SECTIONS = collections.OrderedDict((
))
-class SchemaMismatch(IOError):
+class SchemaMismatch(OSError):
"""
Database schema doesn't match what Stem supports.
@@ -281,13 +281,13 @@ def download_man_page(path: Optional[str] = None, file_handle: Optional[BinaryIO
:param url: url to download tor's asciidoc manual from
:param timeout: seconds to wait before timing out the request
- :raises: **IOError** if unable to retrieve the manual
+ :raises: **OSError** if unable to retrieve the manual
"""
if not path and not file_handle:
raise ValueError("Either the path or file_handle we're saving to must be provided")
elif not stem.util.system.is_available('a2x'):
- raise IOError('We require a2x from asciidoc to provide a man page')
+ raise OSError('We require a2x from asciidoc to provide a man page')
with tempfile.TemporaryDirectory() as dirpath:
asciidoc_path = os.path.join(dirpath, 'tor.1.txt')
@@ -308,7 +308,7 @@ def download_man_page(path: Optional[str] = None, file_handle: Optional[BinaryIO
if not os.path.exists(manual_path):
raise OSError('no man page was generated')
except stem.util.system.CallError as exc:
- raise IOError("Unable to run '%s': %s" % (exc.command, stem.util.str_tools._to_unicode(exc.stderr)))
+ raise OSError("Unable to run '%s': %s" % (exc.command, stem.util.str_tools._to_unicode(exc.stderr)))
if path:
try:
@@ -319,7 +319,7 @@ def download_man_page(path: Optional[str] = None, file_handle: Optional[BinaryIO
shutil.copyfile(manual_path, path)
except OSError as exc:
- raise IOError(exc)
+ raise OSError(exc)
if file_handle:
with open(manual_path, 'rb') as manual_file:
@@ -385,7 +385,7 @@ class Manual(object):
:raises:
* **ImportError** if cache is sqlite and the sqlite3 module is
unavailable
- * **IOError** if a **path** was provided and we were unable to read
+ * **OSError** if a **path** was provided and we were unable to read
it or the schema is out of date
"""
@@ -398,7 +398,7 @@ class Manual(object):
path = CACHE_PATH
if not os.path.exists(path):
- raise IOError("%s doesn't exist" % path)
+ raise OSError("%s doesn't exist" % path)
with sqlite3.connect(path) as conn:
try:
@@ -409,7 +409,7 @@ class Manual(object):
name, synopsis, description, man_commit, stem_commit = conn.execute('SELECT name, synopsis, description, man_commit, stem_commit FROM metadata').fetchone()
except sqlite3.OperationalError as exc:
- raise IOError('Failed to read database metadata from %s: %s' % (path, exc))
+ raise OSError('Failed to read database metadata from %s: %s' % (path, exc))
commandline = dict(conn.execute('SELECT name, description FROM commandline').fetchall())
signals = dict(conn.execute('SELECT name, description FROM signals').fetchall())
@@ -442,7 +442,7 @@ class Manual(object):
:returns: :class:`~stem.manual.Manual` for the system's man page
- :raises: **IOError** if unable to retrieve the manual
+ :raises: **OSError** if unable to retrieve the manual
"""
man_cmd = 'man %s -P cat %s' % ('--encoding=ascii' if HAS_ENCODING_ARG else '', man_path)
@@ -450,7 +450,7 @@ class Manual(object):
try:
man_output = stem.util.system.call(man_cmd, env = {'MANWIDTH': '10000000'})
except OSError as exc:
- raise IOError("Unable to run '%s': %s" % (man_cmd, exc))
+ raise OSError("Unable to run '%s': %s" % (man_cmd, exc))
categories = _get_categories(man_output)
config_options = collections.OrderedDict() # type: collections.OrderedDict[str, stem.manual.ConfigOption]
@@ -484,7 +484,7 @@ class Manual(object):
try:
manual = stem.manual.from_remote()
- except IOError:
+ except OSError:
manual = stem.manual.from_cache()
In addition to our GitWeb dependency this requires 'a2x' which is part of
@@ -499,7 +499,7 @@ class Manual(object):
:returns: latest :class:`~stem.manual.Manual` available for tor
- :raises: **IOError** if unable to retrieve the manual
+ :raises: **OSError** if unable to retrieve the manual
"""
with tempfile.NamedTemporaryFile() as tmp:
@@ -519,7 +519,7 @@ class Manual(object):
:raises:
* **ImportError** if saving as sqlite and the sqlite3 module is
unavailable
- * **IOError** if unsuccessful
+ * **OSError** if unsuccessful
"""
try:
diff --git a/stem/util/conf.py b/stem/util/conf.py
index 1ac0d107..4760d7b4 100644
--- a/stem/util/conf.py
+++ b/stem/util/conf.py
@@ -267,7 +267,7 @@ def uses_settings(handle: str, path: str, lazy_load: bool = True) -> Callable:
:returns: **function** that can be used as a decorator to provide the
configuration
- :raises: **IOError** if we fail to read the configuration file, if
+ :raises: **OSError** if we fail to read the configuration file, if
**lazy_load** is true then this arises when we use the decorator
"""
@@ -416,7 +416,7 @@ class Config(object):
try:
user_config.load("/home/atagar/myConfig")
- except IOError as exc:
+ except OSError as exc:
print "Unable to load the user's config: %s" % exc
# This replace the contents of ssh_config with the values from the user's
@@ -485,7 +485,7 @@ class Config(object):
otherwise
:raises:
- * **IOError** if we fail to read the file (it doesn't exist, insufficient
+ * **OSError** if we fail to read the file (it doesn't exist, insufficient
permissions, etc)
* **ValueError** if no path was provided and we've never been provided one
"""
@@ -547,7 +547,7 @@ class Config(object):
:param path: location to be saved to
:raises:
- * **IOError** if we fail to save the file (insufficient permissions, etc)
+ * **OSError** if we fail to save the file (insufficient permissions, etc)
* **ValueError** if no path was provided and we've never been provided one
"""
diff --git a/stem/util/connection.py b/stem/util/connection.py
index f8a21f50..a83922f7 100644
--- a/stem/util/connection.py
+++ b/stem/util/connection.py
@@ -224,7 +224,7 @@ def get_connections(resolver: Optional['stem.util.connection.Resolver'] = None,
:raises:
* **ValueError** if neither a process_pid nor process_name is provided
- * **IOError** if no connections are available or resolution fails
+ * **OSError** if no connections are available or resolution fails
(generally they're indistinguishable). The common causes are the
command being unavailable or permissions.
"""
@@ -235,7 +235,7 @@ def get_connections(resolver: Optional['stem.util.connection.Resolver'] = None,
if available_resolvers:
resolver = available_resolvers[0]
else:
- raise IOError('Unable to determine a connection resolver')
+ raise OSError('Unable to determine a connection resolver')
if not process_pid and not process_name:
raise ValueError('You must provide a pid or process name to provide connections for')
@@ -258,12 +258,12 @@ def get_connections(resolver: Optional['stem.util.connection.Resolver'] = None,
if len(all_pids) == 0:
if resolver in (Resolver.NETSTAT_WINDOWS, Resolver.PROC, Resolver.BSD_PROCSTAT):
- raise IOError("Unable to determine the pid of '%s'. %s requires the pid to provide the connections." % (process_name, resolver))
+ raise OSError("Unable to determine the pid of '%s'. %s requires the pid to provide the connections." % (process_name, resolver))
elif len(all_pids) == 1:
process_pid = all_pids[0]
else:
if resolver in (Resolver.NETSTAT_WINDOWS, Resolver.PROC, Resolver.BSD_PROCSTAT):
- raise IOError("There's multiple processes named '%s'. %s requires a single pid to provide the connections." % (process_name, resolver))
+ raise OSError("There's multiple processes named '%s'. %s requires a single pid to provide the connections." % (process_name, resolver))
if resolver == Resolver.PROC:
return stem.util.proc.connections(pid = process_pid)
@@ -273,7 +273,7 @@ def get_connections(resolver: Optional['stem.util.connection.Resolver'] = None,
try:
results = stem.util.system.call(resolver_command)
except OSError as exc:
- raise IOError("Unable to query '%s': %s" % (resolver_command, exc))
+ raise OSError("Unable to query '%s': %s" % (resolver_command, exc))
resolver_regex_str = RESOLVER_FILTER[resolver].format(
protocol = '(?P<protocol>\\S+)',
@@ -330,7 +330,7 @@ def get_connections(resolver: Optional['stem.util.connection.Resolver'] = None,
_log('%i connections found' % len(connections))
if not connections:
- raise IOError('No results found using: %s' % resolver_command)
+ raise OSError('No results found using: %s' % resolver_command)
return connections
diff --git a/stem/util/proc.py b/stem/util/proc.py
index 802937b5..106c666b 100644
--- a/stem/util/proc.py
+++ b/stem/util/proc.py
@@ -108,7 +108,7 @@ def system_start_time() -> float:
:returns: **float** for the unix time of when the system started
- :raises: **IOError** if it can't be determined
+ :raises: **OSError** if it can't be determined
"""
start_time, parameter = time.time(), 'system start time'
@@ -119,7 +119,7 @@ def system_start_time() -> float:
_log_runtime(parameter, '/proc/stat[btime]', start_time)
return result
except:
- exc = IOError('unable to parse the /proc/stat btime entry: %s' % btime_line)
+ exc = OSError('unable to parse the /proc/stat btime entry: %s' % btime_line)
_log_failure(parameter, exc)
raise exc
@@ -131,7 +131,7 @@ def physical_memory() -> int:
:returns: **int** for the bytes of physical memory this system has
- :raises: **IOError** if it can't be determined
+ :raises: **OSError** if it can't be determined
"""
start_time, parameter = time.time(), 'system physical memory'
@@ -142,7 +142,7 @@ def physical_memory() -> int:
_log_runtime(parameter, '/proc/meminfo[MemTotal]', start_time)
return result
except:
- exc = IOError('unable to parse the /proc/meminfo MemTotal entry: %s' % mem_total_line)
+ exc = OSError('unable to parse the /proc/meminfo MemTotal entry: %s' % mem_total_line)
_log_failure(parameter, exc)
raise exc
@@ -155,7 +155,7 @@ def cwd(pid: int) -> str:
:returns: **str** with the path of the working directory for the process
- :raises: **IOError** if it can't be determined
+ :raises: **OSError** if it can't be determined
"""
start_time, parameter = time.time(), 'cwd'
@@ -167,7 +167,7 @@ def cwd(pid: int) -> str:
try:
cwd = os.readlink(proc_cwd_link)
except OSError:
- exc = IOError('unable to read %s' % proc_cwd_link)
+ exc = OSError('unable to read %s' % proc_cwd_link)
_log_failure(parameter, exc)
raise exc
@@ -183,7 +183,7 @@ def uid(pid: int) -> int:
:returns: **int** with the user id for the owner of the process
- :raises: **IOError** if it can't be determined
+ :raises: **OSError** if it can't be determined
"""
start_time, parameter = time.time(), 'uid'
@@ -195,7 +195,7 @@ def uid(pid: int) -> int:
_log_runtime(parameter, '%s[Uid]' % status_path, start_time)
return result
except:
- exc = IOError('unable to parse the %s Uid entry: %s' % (status_path, uid_line))
+ exc = OSError('unable to parse the %s Uid entry: %s' % (status_path, uid_line))
_log_failure(parameter, exc)
raise exc
@@ -209,7 +209,7 @@ def memory_usage(pid: int) -> Tuple[int, int]:
:returns: **tuple** of two ints with the memory usage of the process, of the
form **(resident_size, virtual_size)**
- :raises: **IOError** if it can't be determined
+ :raises: **OSError** if it can't be determined
"""
# checks if this is the kernel process
@@ -228,7 +228,7 @@ def memory_usage(pid: int) -> Tuple[int, int]:
_log_runtime(parameter, '%s[VmRSS|VmSize]' % status_path, start_time)
return (residentSize, virtualSize)
except:
- exc = IOError('unable to parse the %s VmRSS and VmSize entries: %s' % (status_path, ', '.join(mem_lines)))
+ exc = OSError('unable to parse the %s VmRSS and VmSize entries: %s' % (status_path, ', '.join(mem_lines)))
_log_failure(parameter, exc)
raise exc
@@ -243,11 +243,11 @@ def stats(pid: int, *stat_types: 'stem.util.proc.Stat') -> Sequence[str]:
:returns: **tuple** with all of the requested statistics as strings
- :raises: **IOError** if it can't be determined
+ :raises: **OSError** if it can't be determined
"""
if CLOCK_TICKS is None:
- raise IOError('Unable to look up SC_CLK_TCK')
+ raise OSError('Unable to look up SC_CLK_TCK')
start_time, parameter = time.time(), 'process %s' % ', '.join(stat_types)
@@ -266,7 +266,7 @@ def stats(pid: int, *stat_types: 'stem.util.proc.Stat') -> Sequence[str]:
stat_comp += stat_line[cmd_end + 1:].split()
if len(stat_comp) < 44 and _is_float(stat_comp[13], stat_comp[14], stat_comp[21]):
- exc = IOError('stat file had an unexpected format: %s' % stat_path)
+ exc = OSError('stat file had an unexpected format: %s' % stat_path)
_log_failure(parameter, exc)
raise exc
@@ -312,21 +312,21 @@ def file_descriptors_used(pid: int) -> int:
:returns: **int** of the number of file descriptors used
- :raises: **IOError** if it can't be determined
+ :raises: **OSError** if it can't be determined
"""
try:
pid = int(pid)
if pid < 0:
- raise IOError("Process pids can't be negative: %s" % pid)
+ raise OSError("Process pids can't be negative: %s" % pid)
except (ValueError, TypeError):
- raise IOError('Process pid was non-numeric: %s' % pid)
+ raise OSError('Process pid was non-numeric: %s' % pid)
try:
return len(os.listdir('/proc/%i/fd' % pid))
except Exception as exc:
- raise IOError('Unable to check number of file descriptors used: %s' % exc)
+ raise OSError('Unable to check number of file descriptors used: %s' % exc)
def connections(pid: Optional[int] = None, user: Optional[str] = None) -> Sequence['stem.util.connection.Connection']:
@@ -340,7 +340,7 @@ def connections(pid: Optional[int] = None, user: Optional[str] = None) -> Sequen
:returns: **list** of :class:`~stem.util.connection.Connection` instances
- :raises: **IOError** if it can't be determined
+ :raises: **OSError** if it can't be determined
"""
start_time, conn = time.time(), []
@@ -352,9 +352,9 @@ def connections(pid: Optional[int] = None, user: Optional[str] = None) -> Sequen
pid = int(pid)
if pid < 0:
- raise IOError("Process pids can't be negative: %s" % pid)
+ raise OSError("Process pids can't be negative: %s" % pid)
except (ValueError, TypeError):
- raise IOError('Process pid was non-numeric: %s' % pid)
+ raise OSError('Process pid was non-numeric: %s' % pid)
elif user:
parameter = 'connections for user %s' % user
else:
@@ -362,7 +362,7 @@ def connections(pid: Optional[int] = None, user: Optional[str] = None) -> Sequen
try:
if not IS_PWD_AVAILABLE:
- raise IOError("This requires python's pwd module, which is unavailable on Windows.")
+ raise OSError("This requires python's pwd module, which is unavailable on Windows.")
inodes = _inodes_for_sockets(pid) if pid else set()
process_uid = stem.util.str_tools._to_bytes(str(pwd.getpwnam(user).pw_uid)) if user else None
@@ -402,14 +402,14 @@ def connections(pid: Optional[int] = None, user: Optional[str] = None) -> Sequen
continue # no port
conn.append(stem.util.connection.Connection(l_addr, l_port, r_addr, r_port, protocol, is_ipv6))
- except IOError as exc:
- raise IOError("unable to read '%s': %s" % (proc_file_path, exc))
+ except OSError as exc:
+ raise OSError("unable to read '%s': %s" % (proc_file_path, exc))
except Exception as exc:
- raise IOError("unable to parse '%s': %s" % (proc_file_path, exc))
+ raise OSError("unable to parse '%s': %s" % (proc_file_path, exc))
_log_runtime(parameter, '/proc/net/[tcp|udp]', start_time)
return conn
- except IOError as exc:
+ except OSError as exc:
_log_failure(parameter, exc)
raise
@@ -422,7 +422,7 @@ def _inodes_for_sockets(pid: int) -> Set[bytes]:
:returns: **set** with inodes for its sockets
- :raises: **IOError** if it can't be determined
+ :raises: **OSError** if it can't be determined
"""
inodes = set()
@@ -430,7 +430,7 @@ def _inodes_for_sockets(pid: int) -> Set[bytes]:
try:
fd_contents = os.listdir('/proc/%s/fd' % pid)
except OSError as exc:
- raise IOError('Unable to read our file descriptors: %s' % exc)
+ raise OSError('Unable to read our file descriptors: %s' % exc)
for fd in fd_contents:
fd_path = '/proc/%s/fd/%s' % (pid, fd)
@@ -447,7 +447,7 @@ def _inodes_for_sockets(pid: int) -> Set[bytes]:
continue # descriptors may shift while we're in the middle of iterating over them
# most likely couldn't be read due to permissions
- raise IOError('unable to determine file descriptor destination (%s): %s' % (exc, fd_path))
+ raise OSError('unable to determine file descriptor destination (%s): %s' % (exc, fd_path))
return inodes
@@ -519,7 +519,7 @@ def _get_lines(file_path: str, line_prefixes: Sequence[str], parameter: str) ->
:returns: mapping of prefixes to the matching line
- :raises: **IOError** if unable to read the file or can't find all of the prefixes
+ :raises: **OSError** if unable to read the file or can't find all of the prefixes
"""
try:
@@ -544,10 +544,10 @@ def _get_lines(file_path: str, line_prefixes: Sequence[str], parameter: str) ->
else:
msg = '%s did not contain %s entries' % (file_path, ', '.join(remaining_prefixes))
- raise IOError(msg)
+ raise OSError(msg)
else:
return results
- except IOError as exc:
+ except OSError as exc:
_log_failure(parameter, exc)
raise
diff --git a/stem/util/system.py b/stem/util/system.py
index b0130fb1..5d3e3e2b 100644
--- a/stem/util/system.py
+++ b/stem/util/system.py
@@ -525,7 +525,7 @@ def name_by_pid(pid: int) -> Optional[str]:
if stem.util.proc.is_available():
try:
process_name = stem.util.proc.stats(pid, stem.util.proc.Stat.COMMAND)[0]
- except IOError:
+ except OSError:
pass
# attempts to resolve using ps, failing if:
@@ -923,7 +923,7 @@ def cwd(pid: int) -> Optional[str]:
if stem.util.proc.is_available():
try:
return stem.util.proc.cwd(pid)
- except IOError:
+ except OSError:
pass
# Fall back to a pwdx query. This isn't available on BSD.
@@ -1024,7 +1024,7 @@ def start_time(pid: str) -> Optional[float]:
if stem.util.proc.is_available():
try:
return float(stem.util.proc.stats(pid, stem.util.proc.Stat.START_TIME)[0])
- except IOError:
+ except OSError:
pass
try:
@@ -1053,7 +1053,7 @@ def tail(target: Union[str, BinaryIO], lines: Optional[int] = None) -> Iterator[
:returns: **generator** that reads lines, starting with the end
- :raises: **IOError** if unable to read the file
+ :raises: **OSError** if unable to read the file
"""
if isinstance(target, str):
@@ -1163,7 +1163,7 @@ def is_tarfile(path: str) -> bool:
# Checking if it's a tar file may fail due to permissions so failing back
# to the mime type...
#
- # IOError: [Errno 13] Permission denied: '/vmlinuz.old'
+ # OSError: [Errno 13] Permission denied: '/vmlinuz.old'
#
# With python 3 insuffient permissions raises an AttributeError instead...
#
@@ -1171,7 +1171,7 @@ def is_tarfile(path: str) -> bool:
try:
return tarfile.is_tarfile(path)
- except (IOError, AttributeError):
+ except (OSError, AttributeError):
return mimetypes.guess_type(path)[0] == 'application/x-tar'
@@ -1402,7 +1402,7 @@ def set_process_name(process_name: str) -> None:
:param process_name: new name for our process
- :raises: **IOError** if the process cannot be renamed
+ :raises: **OSError** if the process cannot be renamed
"""
# This is mostly based on...
@@ -1448,7 +1448,7 @@ def _set_argv(process_name: str) -> None:
Py_GetArgcArgv(argv, ctypes.pointer(argc))
if len(process_name) > _MAX_NAME_LENGTH:
- raise IOError("Can't rename process to something longer than our initial name (this would overwrite memory used for the env)")
+ raise OSError("Can't rename process to something longer than our initial name (this would overwrite memory used for the env)")
# space we need to clear
zero_size = max(len(current_name), len(process_name))
diff --git a/stem/version.py b/stem/version.py
index 9ed5fa37..cd2c3c39 100644
--- a/stem/version.py
+++ b/stem/version.py
@@ -60,7 +60,7 @@ def get_system_tor_version(tor_cmd: str = 'tor') -> 'stem.version.Version':
:returns: :class:`~stem.version.Version` provided by the tor command
- :raises: **IOError** if unable to query or parse the version
+ :raises: **OSError** if unable to query or parse the version
"""
if tor_cmd not in VERSION_CACHE:
@@ -73,11 +73,11 @@ def get_system_tor_version(tor_cmd: str = 'tor') -> 'stem.version.Version':
if 'No such file or directory' in str(exc):
if os.path.isabs(tor_cmd):
- raise IOError("Unable to check tor's version. '%s' doesn't exist." % tor_cmd)
+ raise OSError("Unable to check tor's version. '%s' doesn't exist." % tor_cmd)
else:
- raise IOError("Unable to run '%s'. Maybe tor isn't in your PATH?" % version_cmd)
+ raise OSError("Unable to run '%s'. Maybe tor isn't in your PATH?" % version_cmd)
- raise IOError(exc)
+ raise OSError(exc)
for line in version_output:
# output example:
@@ -90,10 +90,10 @@ def get_system_tor_version(tor_cmd: str = 'tor') -> 'stem.version.Version':
VERSION_CACHE[tor_cmd] = Version(version_str)
break
except ValueError as exc:
- raise IOError(exc)
+ raise OSError(exc)
if tor_cmd not in VERSION_CACHE:
- raise IOError("'%s' didn't provide a parseable version:\n\n%s" % (version_cmd, '\n'.join(version_output)))
+ raise OSError("'%s' didn't provide a parseable version:\n\n%s" % (version_cmd, '\n'.join(version_output)))
return VERSION_CACHE[tor_cmd]
diff --git a/test/integ/util/connection.py b/test/integ/util/connection.py
index 3e22667e..d2401aab 100644
--- a/test/integ/util/connection.py
+++ b/test/integ/util/connection.py
@@ -69,7 +69,7 @@ class TestConnection(unittest.TestCase):
def test_connections_by_ss(self):
try:
self.check_resolver(Resolver.SS)
- except (IOError, OSError):
+ except OSError:
self.skipTest('(ticket 27479)')
def test_connections_by_lsof(self):
diff --git a/test/integ/version.py b/test/integ/version.py
index 0df48646..1241fcfd 100644
--- a/test/integ/version.py
+++ b/test/integ/version.py
@@ -25,10 +25,10 @@ class TestVersion(unittest.TestCase):
stem.version.get_system_tor_version()
# try running against a command that exists, but isn't tor
- self.assertRaises(IOError, stem.version.get_system_tor_version, 'ls')
+ self.assertRaises(OSError, stem.version.get_system_tor_version, 'ls')
# try running against a command that doesn't exist
- self.assertRaises(IOError, stem.version.get_system_tor_version, 'blarg')
+ self.assertRaises(OSError, stem.version.get_system_tor_version, 'blarg')
@test.require.controller
@async_test
diff --git a/test/unit/descriptor/collector.py b/test/unit/descriptor/collector.py
index 2960cf53..3f053a02 100644
--- a/test/unit/descriptor/collector.py
+++ b/test/unit/descriptor/collector.py
@@ -99,16 +99,16 @@ class TestCollector(unittest.TestCase):
@patch('urllib.request.urlopen')
def test_index_retries(self, urlopen_mock):
- urlopen_mock.side_effect = IOError('boom')
+ urlopen_mock.side_effect = OSError('boom')
collector = CollecTor(retries = 0)
- self.assertRaisesRegexp(IOError, 'boom', collector.index)
+ self.assertRaisesRegexp(OSError, 'boom', collector.index)
self.assertEqual(1, urlopen_mock.call_count)
urlopen_mock.reset_mock()
collector = CollecTor(retries = 4)
- self.assertRaisesRegexp(IOError, 'boom', collector.index)
+ self.assertRaisesRegexp(OSError, 'boom', collector.index)
self.assertEqual(5, urlopen_mock.call_count)
@patch('urllib.request.urlopen', Mock(return_value = io.BytesIO(b'not json')))
@@ -123,7 +123,7 @@ class TestCollector(unittest.TestCase):
with patch('urllib.request.urlopen', Mock(return_value = io.BytesIO(b'not compressed'))):
collector = CollecTor()
- self.assertRaisesRegexp(IOError, 'Failed to decompress as %s' % compression, collector.index, compression)
+ self.assertRaisesRegexp(OSError, 'Failed to decompress as %s' % compression, collector.index, compression)
@patch('stem.descriptor.collector.CollecTor.index', Mock(return_value = EXAMPLE_INDEX))
def test_files(self):
diff --git a/test/unit/directory/fallback.py b/test/unit/directory/fallback.py
index 7b8f6da8..878218b8 100644
--- a/test/unit/directory/fallback.py
+++ b/test/unit/directory/fallback.py
@@ -117,11 +117,11 @@ class TestFallback(unittest.TestCase):
@patch('urllib.request.urlopen', Mock(return_value = io.BytesIO(b'\n'.join(FALLBACK_GITWEB_CONTENT.splitlines()[1:]))))
def test_from_remote_no_header(self):
- self.assertRaisesRegexp(IOError, 'does not have a type field indicating it is fallback directory metadata', stem.directory.Fallback.from_remote)
+ self.assertRaisesRegexp(OSError, 'does not have a type field indicating it is fallback directory metadata', stem.directory.Fallback.from_remote)
@patch('urllib.request.urlopen', Mock(return_value = io.BytesIO(FALLBACK_GITWEB_CONTENT.replace(b'version=2.0.0', b'version'))))
def test_from_remote_malformed_header(self):
- self.assertRaisesRegexp(IOError, 'Malformed fallback directory header line: /\\* version \\*/', stem.directory.Fallback.from_remote)
+ self.assertRaisesRegexp(OSError, 'Malformed fallback directory header line: /\\* version \\*/', stem.directory.Fallback.from_remote)
def test_from_remote_malformed(self):
test_values = {
@@ -135,7 +135,7 @@ class TestFallback(unittest.TestCase):
for entry, expected in test_values.items():
with patch('urllib.request.urlopen', Mock(return_value = io.BytesIO(entry))):
- self.assertRaisesRegexp(IOError, re.escape(expected), stem.directory.Fallback.from_remote)
+ self.assertRaisesRegexp(OSError, re.escape(expected), stem.directory.Fallback.from_remote)
def test_persistence(self):
expected = {
diff --git a/test/unit/manual.py b/test/unit/manual.py
index 9d842972..d290f332 100644
--- a/test/unit/manual.py
+++ b/test/unit/manual.py
@@ -238,14 +238,14 @@ class TestManual(unittest.TestCase):
@patch('stem.util.system.is_available', Mock(return_value = False))
def test_download_man_page_requires_a2x(self):
exc_msg = 'We require a2x from asciidoc to provide a man page'
- self.assertRaisesWith(IOError, exc_msg, stem.manual.download_man_page, '/tmp/no_such_file')
+ self.assertRaisesWith(OSError, exc_msg, stem.manual.download_man_page, '/tmp/no_such_file')
@patch('tempfile.TemporaryDirectory', Mock(return_value = TEMP_DIR_MOCK))
- @patch('stem.manual.open', Mock(side_effect = IOError('unable to write to file')), create = True)
+ @patch('stem.manual.open', Mock(side_effect = OSError('unable to write to file')), create = True)
@patch('stem.util.system.is_available', Mock(return_value = True))
def test_download_man_page_when_unable_to_write(self):
exc_msg = "Unable to download tor's manual from https://gitweb.torproject.org/tor.git/plain/doc/man/tor.1.txt to /no/such/path/tor.1.txt: unable to write to file"
- self.assertRaisesWith(IOError, exc_msg, stem.manual.download_man_page, '/tmp/no_such_file')
+ self.assertRaisesWith(OSError, exc_msg, stem.manual.download_man_page, '/tmp/no_such_file')
@patch('tempfile.TemporaryDirectory', Mock(return_value = TEMP_DIR_MOCK))
@patch('stem.manual.open', Mock(return_value = io.BytesIO()), create = True)
@@ -253,7 +253,7 @@ class TestManual(unittest.TestCase):
@patch('urllib.request.urlopen', Mock(side_effect = urllib.request.URLError('<urlopen error [Errno -2] Name or service not known>')))
def test_download_man_page_when_download_fails(self):
exc_msg = "Unable to download tor's manual from https://www.atagar.com/foo/bar to /no/such/path/tor.1.txt: <urlopen error <urlopen error [Errno -2] Name or service not known>>"
- self.assertRaisesWith(IOError, exc_msg, stem.manual.download_man_page, '/tmp/no_such_file', url = 'https://www.atagar.com/foo/bar')
+ self.assertRaisesWith(OSError, exc_msg, stem.manual.download_man_page, '/tmp/no_such_file', url = 'https://www.atagar.com/foo/bar')
@patch('tempfile.TemporaryDirectory', Mock(return_value = TEMP_DIR_MOCK))
@patch('stem.manual.open', Mock(return_value = io.BytesIO()), create = True)
@@ -262,7 +262,7 @@ class TestManual(unittest.TestCase):
@patch('urllib.request.urlopen', Mock(return_value = io.BytesIO(b'test content')))
def test_download_man_page_when_a2x_fails(self):
exc_msg = "Unable to run 'a2x -f manpage /no/such/path/tor.1.txt': call failed"
- self.assertRaisesWith(IOError, exc_msg, stem.manual.download_man_page, '/tmp/no_such_file', url = 'https://www.atagar.com/foo/bar')
+ self.assertRaisesWith(OSError, exc_msg, stem.manual.download_man_page, '/tmp/no_such_file', url = 'https://www.atagar.com/foo/bar')
@patch('tempfile.TemporaryDirectory', Mock(return_value = TEMP_DIR_MOCK))
@patch('stem.manual.open', create = True)
@@ -290,7 +290,7 @@ class TestManual(unittest.TestCase):
@patch('stem.util.system.call', Mock(side_effect = OSError('man --encoding=ascii -P cat tor returned exit status 16')))
def test_from_man_when_manual_is_unavailable(self):
exc_msg = "Unable to run 'man --encoding=ascii -P cat tor': man --encoding=ascii -P cat tor returned exit status 16"
- self.assertRaisesWith(IOError, exc_msg, stem.manual.Manual.from_man)
+ self.assertRaisesWith(OSError, exc_msg, stem.manual.Manual.from_man)
@patch('stem.util.system.call', Mock(return_value = []))
def test_when_man_is_empty(self):
diff --git a/test/unit/util/connection.py b/test/unit/util/connection.py
index 848ce989..575667b4 100644
--- a/test/unit/util/connection.py
+++ b/test/unit/util/connection.py
@@ -181,12 +181,12 @@ class TestConnection(unittest.TestCase):
def test_download_retries(self, urlopen_mock):
urlopen_mock.side_effect = urllib.request.URLError('boom')
- self.assertRaisesRegexp(IOError, 'boom', stem.util.connection.download, URL)
+ self.assertRaisesRegexp(OSError, 'boom', stem.util.connection.download, URL)
self.assertEqual(1, urlopen_mock.call_count)
urlopen_mock.reset_mock()
- self.assertRaisesRegexp(IOError, 'boom', stem.util.connection.download, URL, retries = 4)
+ self.assertRaisesRegexp(OSError, 'boom', stem.util.connection.download, URL, retries = 4)
self.assertEqual(5, urlopen_mock.call_count)
@patch('os.access')
@@ -249,8 +249,8 @@ class TestConnection(unittest.TestCase):
self.assertEqual(expected, stem.util.connection.get_connections(Resolver.PROC, process_pid = 1111))
- proc_mock.side_effect = IOError('No connections for you!')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.PROC, process_pid = 1111)
+ proc_mock.side_effect = OSError('No connections for you!')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.PROC, process_pid = 1111)
@patch('stem.util.system.call')
def test_get_connections_by_netstat(self, call_mock):
@@ -262,11 +262,11 @@ class TestConnection(unittest.TestCase):
expected = [Connection('192.168.0.1', 44284, '38.229.79.2', 443, 'tcp', False)]
self.assertEqual(expected, stem.util.connection.get_connections(Resolver.NETSTAT, process_pid = 15843, process_name = 'tor'))
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.NETSTAT, process_pid = 15843, process_name = 'stuff')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.NETSTAT, process_pid = 1111, process_name = 'tor')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.NETSTAT, process_pid = 15843, process_name = 'stuff')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.NETSTAT, process_pid = 1111, process_name = 'tor')
call_mock.side_effect = OSError('Unable to call netstat')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.NETSTAT, process_pid = 1111)
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.NETSTAT, process_pid = 1111)
@patch('stem.util.system.call', Mock(return_value = NETSTAT_IPV6_OUTPUT.split('\n')))
def test_get_connections_by_netstat_ipv6(self):
@@ -291,10 +291,10 @@ class TestConnection(unittest.TestCase):
expected = [Connection('192.168.0.1', 44284, '38.229.79.2', 443, 'tcp', False)]
self.assertEqual(expected, stem.util.connection.get_connections(Resolver.NETSTAT_WINDOWS, process_pid = 15843, process_name = 'tor'))
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.NETSTAT_WINDOWS, process_pid = 1111, process_name = 'tor')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.NETSTAT_WINDOWS, process_pid = 1111, process_name = 'tor')
call_mock.side_effect = OSError('Unable to call netstat')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.NETSTAT_WINDOWS, process_pid = 1111)
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.NETSTAT_WINDOWS, process_pid = 1111)
@patch('stem.util.system.call')
def test_get_connections_by_ss(self, call_mock):
@@ -309,11 +309,11 @@ class TestConnection(unittest.TestCase):
]
self.assertEqual(expected, stem.util.connection.get_connections(Resolver.SS, process_pid = 15843, process_name = 'tor'))
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.SS, process_pid = 15843, process_name = 'stuff')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.SS, process_pid = 1111, process_name = 'tor')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.SS, process_pid = 15843, process_name = 'stuff')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.SS, process_pid = 1111, process_name = 'tor')
call_mock.side_effect = OSError('Unable to call ss')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.SS, process_pid = 1111)
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.SS, process_pid = 1111)
@patch('stem.util.system.call', Mock(return_value = SS_IPV6_OUTPUT.split('\n')))
def test_get_connections_by_ss_ipv6(self):
@@ -345,11 +345,11 @@ class TestConnection(unittest.TestCase):
]
self.assertEqual(expected, stem.util.connection.get_connections(Resolver.LSOF, process_pid = 15843, process_name = 'tor'))
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.LSOF, process_pid = 15843, process_name = 'stuff')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.LSOF, process_pid = 1111, process_name = 'tor')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.LSOF, process_pid = 15843, process_name = 'stuff')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.LSOF, process_pid = 1111, process_name = 'tor')
call_mock.side_effect = OSError('Unable to call lsof')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.LSOF, process_pid = 1111)
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.LSOF, process_pid = 1111)
@patch('stem.util.system.call', Mock(return_value = LSOF_IPV6_OUTPUT.split('\n')))
def test_get_connections_by_lsof_ipv6(self):
@@ -392,11 +392,11 @@ class TestConnection(unittest.TestCase):
]
self.assertEqual(expected, stem.util.connection.get_connections(Resolver.BSD_SOCKSTAT, process_pid = 4397, process_name = 'tor'))
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.BSD_SOCKSTAT, process_pid = 4397, process_name = 'stuff')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.BSD_SOCKSTAT, process_pid = 1111, process_name = 'tor')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.BSD_SOCKSTAT, process_pid = 4397, process_name = 'stuff')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.BSD_SOCKSTAT, process_pid = 1111, process_name = 'tor')
call_mock.side_effect = OSError('Unable to call sockstat')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.BSD_SOCKSTAT, process_pid = 1111)
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.BSD_SOCKSTAT, process_pid = 1111)
@patch('stem.util.system.call')
def test_get_connections_by_procstat(self, call_mock):
@@ -413,11 +413,11 @@ class TestConnection(unittest.TestCase):
]
self.assertEqual(expected, stem.util.connection.get_connections(Resolver.BSD_PROCSTAT, process_pid = 3561, process_name = 'tor'))
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.BSD_PROCSTAT, process_pid = 3561, process_name = 'stuff')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.BSD_PROCSTAT, process_pid = 1111, process_name = 'tor')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.BSD_PROCSTAT, process_pid = 3561, process_name = 'stuff')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.BSD_PROCSTAT, process_pid = 1111, process_name = 'tor')
call_mock.side_effect = OSError('Unable to call procstat')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.BSD_PROCSTAT, process_pid = 1111)
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.BSD_PROCSTAT, process_pid = 1111)
@patch('stem.util.system.call')
def test_get_connections_by_fstat(self, call_mock):
@@ -432,11 +432,11 @@ class TestConnection(unittest.TestCase):
]
self.assertEqual(expected, stem.util.connection.get_connections(Resolver.BSD_FSTAT, process_pid = 15843, process_name = 'tor'))
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.BSD_FSTAT, process_pid = 15843, process_name = 'stuff')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.BSD_FSTAT, process_pid = 1111, process_name = 'tor')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.BSD_FSTAT, process_pid = 15843, process_name = 'stuff')
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.BSD_FSTAT, process_pid = 1111, process_name = 'tor')
call_mock.side_effect = OSError('Unable to call fstat')
- self.assertRaises(IOError, stem.util.connection.get_connections, Resolver.BSD_FSTAT, process_pid = 1111)
+ self.assertRaises(OSError, stem.util.connection.get_connections, Resolver.BSD_FSTAT, process_pid = 1111)
def test_is_valid_ipv4_address(self):
"""
diff --git a/test/unit/util/proc.py b/test/unit/util/proc.py
index b5667c62..31fd742c 100644
--- a/test/unit/util/proc.py
+++ b/test/unit/util/proc.py
@@ -174,7 +174,7 @@ class TestProc(unittest.TestCase):
# check that we reject bad pids
for arg in (None, -100, 'hello',):
- self.assertRaises(IOError, proc.file_descriptors_used, arg)
+ self.assertRaises(OSError, proc.file_descriptors_used, arg)
# when proc directory doesn't exist
@@ -182,7 +182,7 @@ class TestProc(unittest.TestCase):
listdir_mock.side_effect = OSError(error_msg)
exc_msg = 'Unable to check number of file descriptors used: %s' % error_msg
- self.assertRaisesWith(IOError, exc_msg, proc.file_descriptors_used, 2118)
+ self.assertRaisesWith(OSError, exc_msg, proc.file_descriptors_used, 2118)
# successful calls
diff --git a/test/unit/util/system.py b/test/unit/util/system.py
index 042c1bd1..fe501389 100644
--- a/test/unit/util/system.py
+++ b/test/unit/util/system.py
@@ -402,11 +402,11 @@ class TestSystem(unittest.TestCase):
self.assertEqual(14, len(list(system.tail(path))))
self.assertEqual(14, len(list(system.tail(path, 200))))
- self.assertRaises(IOError, list, system.tail('/path/doesnt/exist'))
+ self.assertRaises(OSError, list, system.tail('/path/doesnt/exist'))
fd, temp_path = tempfile.mkstemp()
os.chmod(temp_path, 0o077) # remove read permissions
- self.assertRaises(IOError, list, system.tail(temp_path))
+ self.assertRaises(OSError, list, system.tail(temp_path))
os.close(fd)
os.remove(temp_path)
diff --git a/test/unit/version.py b/test/unit/version.py
index 33439cf8..70109aca 100644
--- a/test/unit/version.py
+++ b/test/unit/version.py
@@ -58,7 +58,7 @@ class TestVersion(unittest.TestCase):
Tor version output that doesn't include a version within it.
"""
- self.assertRaisesRegexp(IOError, "'tor_unit --version' didn't provide a parseable version", stem.version.get_system_tor_version, 'tor_unit')
+ self.assertRaisesRegexp(OSError, "'tor_unit --version' didn't provide a parseable version", stem.version.get_system_tor_version, 'tor_unit')
@patch('stem.util.system.call', Mock(return_value = MALFORMED_TOR_VERSION.splitlines()))
@patch.dict(stem.version.VERSION_CACHE)
@@ -68,7 +68,7 @@ class TestVersion(unittest.TestCase):
version.
"""
- self.assertRaisesWith(IOError, "'0.2.blah (git-73ff13ab3cc9570d)' isn't a properly formatted tor version", stem.version.get_system_tor_version, 'tor_unit')
+ self.assertRaisesWith(OSError, "'0.2.blah (git-73ff13ab3cc9570d)' isn't a properly formatted tor version", stem.version.get_system_tor_version, 'tor_unit')
def test_parsing(self):
"""
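For context on the hunks above: since Python 3.3, IOError has been an
alias of OSError, so switching the expected exception type only renames
it to the canonical class; the set of caught exceptions is unchanged. A
quick check:

    # Since Python 3.3, IOError is merely an alias of OSError.
    assert IOError is OSError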
commit 7ef8c64833d41337d5c9cc5baaee2808092c9aad
Author: Philipp Winter <phw(a)nymity.ch>
Date: Fri Jun 26 10:00:29 2020 -0700
Make models more configurable.
This patch removes the --oneshot switch and replaces it with several
new options for OnionPerf's "measure" command:
--tgen-start-pause (Initial pause before file transfers.)
--tgen-num-transfers (Number of file transfers.)
--tgen-intertransfer-pause (Pause in between file transfers.)
--tgen-transfer-size (Size of each file transfer.)
By default, OnionPerf continues to run in "continuous" mode. One-shot
mode can be simulated by running onionperf with the following flag:
onionperf measure --tgen-num-transfers=1
In addition to the above options, this patch improves the code base
by 1) adding a TGenConf class to hold TGen's configuration and 2)
adding a TGenModelConf class to hold TGen's traffic model.
This fixes tpo/metrics/onionperf#33432.
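As a rough sketch of how these pieces fit together (based on the diff
below; the Args container is a hypothetical stand-in for argparse's
parsed namespace and is not part of the patch), the measure entry point
builds a TGenModelConf from the command-line values and treats
--tgen-num-transfers=0 as continuous mode:

    from onionperf.model import TGenModelConf

    class Args(object):
        # Hypothetical stand-in for the argparse namespace of `onionperf measure`.
        tgenstartpause = 5             # --tgen-start-pause
        tgennumtransfers = 1           # --tgen-num-transfers (0 selects continuous mode)
        tgenintertransferpause = 300   # --tgen-intertransfer-pause
        tgentransfersize = "5 MiB"     # --tgen-transfer-size

    args = Args()
    tgen_model = TGenModelConf(initial_pause=args.tgenstartpause,
                               transfer_size=args.tgentransfersize,
                               num_transfers=args.tgennumtransfers,
                               continuous_transfers=args.tgennumtransfers == 0,
                               inter_transfer_pause=args.tgenintertransferpause)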
---
onionperf/measurement.py | 102 +++++++++++++++++++---------------
onionperf/model.py | 108 +++++++++++++++++++++++-------------
onionperf/onionperf | 60 +++++++++++++++-----
onionperf/tests/test_measurement.py | 12 ++--
4 files changed, 175 insertions(+), 107 deletions(-)
diff --git a/onionperf/measurement.py b/onionperf/measurement.py
index af1fa0d..e2d8d1c 100644
--- a/onionperf/measurement.py
+++ b/onionperf/measurement.py
@@ -15,6 +15,16 @@ from stem.control import Controller
from stem.version import Version, Requirement, get_system_tor_version
from stem import __version__ as stem_version
+class TGenConf(object):
+ """Represents a TGen configuration, for both client and server."""
+ def __init__(self, listen_port=None, connect_ip=None, connect_port=None, tor_ctl_port=None, tor_socks_port=None):
+ self.listen_port = str(listen_port)
+ self.tor_ctl_port = tor_ctl_port
+ self.tor_socks_port = tor_socks_port
+ # TGen clients use connect_ip and connect_port.
+ self.connect_ip = connect_ip
+ self.connect_port = connect_port
+
# onionperf imports
from . import analysis, monitor, model, util
@@ -173,12 +183,11 @@ def logrotate_thread_task(writables, tgen_writable, torctl_writable, docroot, ni
class Measurement(object):
- def __init__(self, tor_bin_path, tgen_bin_path, datadir_path, privatedir_path, nickname, oneshot, additional_client_conf=None, torclient_conf_file=None, torserver_conf_file=None, single_onion=False):
+ def __init__(self, tor_bin_path, tgen_bin_path, datadir_path, privatedir_path, nickname, additional_client_conf=None, torclient_conf_file=None, torserver_conf_file=None, single_onion=False):
self.tor_bin_path = tor_bin_path
self.tgen_bin_path = tgen_bin_path
self.datadir_path = datadir_path
self.privatedir_path = privatedir_path
- self.oneshot = oneshot
self.nickname = nickname
self.threads = None
self.done_event = None
@@ -190,20 +199,30 @@ class Measurement(object):
self.torserver_conf_file = torserver_conf_file
self.single_onion = single_onion
- def run(self, do_onion=True, do_inet=True, client_tgen_listen_port=58888, client_tgen_connect_ip='0.0.0.0', client_tgen_connect_port=8080, client_tor_ctl_port=59050, client_tor_socks_port=59000,
- server_tgen_listen_port=8080, server_tor_ctl_port=59051, server_tor_socks_port=59001):
+ def run(self, do_onion=True, do_inet=True, tgen_model=None, tgen_client_conf=None, tgen_server_conf=None):
'''
- only `server_tgen_listen_port` are "public" and need to be opened on the firewall.
- if `client_tgen_connect_port` != `server_tgen_listen_port`, then you should have installed a forwarding rule in the firewall.
+ only `tgen_server_conf.listen_port` are "public" and need to be opened on the firewall.
+ if `tgen_client_conf.connect_port` != `tgen_server_conf.listen_port`, then you should have installed a forwarding rule in the firewall.
all ports need to be unique though, and unique among multiple onionperf instances.
here are some sane defaults:
- client_tgen_listen_port=58888, client_tgen_connect_port=8080, client_tor_ctl_port=59050, client_tor_socks_port=59000,
- server_tgen_listen_port=8080, server_tor_ctl_port=59051, server_tor_socks_port=59001
+ tgen_client_conf.listen_port=58888, tgen_client_conf.connect_port=8080, tgen_client_conf.tor_ctl_port=59050, tgen_client_conf.tor_socks_port=59000,
+ tgen_server_conf.listen_port=8080, tgen_server_conf.tor_ctl_port=59051, tgen_server_conf.tor_socks_port=59001
'''
self.threads = []
self.done_event = threading.Event()
+ if tgen_client_conf is None:
+ tgen_client_conf = TGenConf(listen_port=58888,
+ connect_ip='0.0.0.0',
+ connect_port=8080,
+ tor_ctl_port=59050,
+ tor_socks_port=59000)
+ if tgen_server_conf is None:
+ tgen_server_conf = TGenConf(listen_port=8080,
+ tor_ctl_port=59051,
+ tor_socks_port=59001)
+
# if ctrl-c is pressed, shutdown child processes properly
try:
# make sure stem and Tor supports ephemeral HS (version >= 0.2.7.1-alpha)
@@ -225,52 +244,53 @@ class Measurement(object):
tgen_client_writable, torctl_client_writable = None, None
if do_onion or do_inet:
- general_writables.append(self.__start_tgen_server(server_tgen_listen_port))
+ tgen_model.port = tgen_server_conf.listen_port
+ general_writables.append(self.__start_tgen_server(tgen_model))
if do_onion:
logging.info("Onion Service private keys will be placed in {0}".format(self.privatedir_path))
# one must not have an open socks port when running a single
# onion service. see tor's man page for more information.
if self.single_onion:
- server_tor_socks_port = 0
- tor_writable, torctl_writable = self.__start_tor_server(server_tor_ctl_port,
- server_tor_socks_port,
- {client_tgen_connect_port:server_tgen_listen_port})
+ tgen_server_conf.tor_socks_port = 0
+ tor_writable, torctl_writable = self.__start_tor_server(tgen_server_conf.tor_ctl_port,
+ tgen_server_conf.tor_socks_port,
+ {tgen_client_conf.connect_port:tgen_server_conf.listen_port})
general_writables.append(tor_writable)
general_writables.append(torctl_writable)
if do_onion or do_inet:
- tor_writable, torctl_client_writable = self.__start_tor_client(client_tor_ctl_port, client_tor_socks_port)
+ tor_writable, torctl_client_writable = self.__start_tor_client(tgen_client_conf.tor_ctl_port, tgen_client_conf.tor_socks_port)
general_writables.append(tor_writable)
server_urls = []
if do_onion and self.hs_v3_service_id is not None:
- server_urls.append("{0}.onion:{1}".format(self.hs_v3_service_id, client_tgen_connect_port))
+ server_urls.append("{0}.onion:{1}".format(self.hs_v3_service_id, tgen_client_conf.connect_port))
if do_inet:
- connect_ip = client_tgen_connect_ip if client_tgen_connect_ip != '0.0.0.0' else util.get_ip_address()
- server_urls.append("{0}:{1}".format(connect_ip, client_tgen_connect_port))
+ connect_ip = tgen_client_conf.connect_ip if tgen_client_conf.connect_ip != '0.0.0.0' else util.get_ip_address()
+ server_urls.append("{0}:{1}".format(connect_ip, tgen_client_conf.connect_port))
+ tgen_model.servers = server_urls
if do_onion or do_inet:
assert len(server_urls) > 0
- tgen_client_writable = self.__start_tgen_client(server_urls, client_tgen_listen_port, client_tor_socks_port)
+ tgen_model.port = tgen_client_conf.listen_port
+ tgen_model.socks_port = tgen_client_conf.tor_socks_port
+ tgen_client_writable = self.__start_tgen_client(tgen_model)
self.__start_log_processors(general_writables, tgen_client_writable, torctl_client_writable)
logging.info("Bootstrapping finished, entering heartbeat loop")
time.sleep(1)
- if self.oneshot:
- logging.info("Onionperf is running in Oneshot mode. It will download a 5M file and shut down gracefully...")
while True:
- # TODO add status update of some kind? maybe the number of files in the www directory?
- # logging.info("Heartbeat: {0} downloads have completed successfully".format(self.__get_download_count(tgen_client_writable.filename)))
- if self.oneshot:
+ if tgen_model.num_transfers:
downloads = 0
while True:
downloads = self.__get_download_count(tgen_client_writable.filename)
- if downloads >= 1:
- logging.info("Onionperf has downloaded a 5M file in oneshot mode, and will now shut down.")
- break
+ time.sleep(1)
+ if downloads >= tgen_model.num_transfers:
+ logging.info("Onionperf has downloaded %d files and will now shut down." % tgen_model.num_transfers)
+ break
else:
continue
break
@@ -320,35 +340,25 @@ class Measurement(object):
logrotate.start()
self.threads.append(logrotate)
- def __start_tgen_client(self, server_urls, tgen_port, socks_port):
- return self.__start_tgen("client", tgen_port, socks_port, server_urls)
+ def __start_tgen_client(self, tgen_model_conf):
+ return self.__start_tgen("client", tgen_model_conf)
- def __start_tgen_server(self, tgen_port):
- return self.__start_tgen("server", tgen_port)
+ def __start_tgen_server(self, tgen_model_conf):
+ return self.__start_tgen("server", tgen_model_conf)
- def __start_tgen(self, name, tgen_port, socks_port=None, server_urls=None):
- logging.info("Starting TGen {0} process on port {1}...".format(name, tgen_port))
+ def __start_tgen(self, name, tgen_model_conf):
+ logging.info("Starting TGen {0} process on port {1}...".format(name, tgen_model_conf.port))
tgen_datadir = "{0}/tgen-{1}".format(self.datadir_path, name)
if not os.path.exists(tgen_datadir): os.makedirs(tgen_datadir)
tgen_confpath = "{0}/tgen.graphml.xml".format(tgen_datadir)
if os.path.exists(tgen_confpath): os.remove(tgen_confpath)
- if socks_port is None:
- model.ListenModel(tgen_port="{0}".format(tgen_port)).dump_to_file(tgen_confpath)
- logging.info("TGen server running at 0.0.0.0:{0}".format(tgen_port))
+ if tgen_model_conf.socks_port is None:
+ model.ListenModel(tgen_port="{0}".format(tgen_model_conf.port)).dump_to_file(tgen_confpath)
+ logging.info("TGen server running at 0.0.0.0:{0}".format(tgen_model_conf.port))
else:
-
- tgen_model_args = {
- "tgen_port": "{0}".format(tgen_port),
- "tgen_servers": server_urls,
- "socksproxy": "127.0.0.1:{0}".format(socks_port)
- }
- if self.oneshot:
- tgen_model = model.OneshotModel(**tgen_model_args)
- else:
- tgen_model = model.TorperfModel(**tgen_model_args)
-
+ tgen_model = model.TorperfModel(tgen_model_conf)
tgen_model.dump_to_file(tgen_confpath)
tgen_logpath = "{0}/onionperf.tgen.log".format(tgen_datadir)
diff --git a/onionperf/model.py b/onionperf/model.py
index cb45f51..a4af2fc 100644
--- a/onionperf/model.py
+++ b/onionperf/model.py
@@ -41,6 +41,21 @@ class TGenLoadableModel(TGenModel):
model_instance = cls(graph)
return model_instance
+class TGenModelConf(object):
+ """Represents a TGen traffic model configuration."""
+ def __init__(self, initial_pause=0, num_transfers=1, transfer_size="5 MiB",
+ continuous_transfers=False, inter_transfer_pause=5, port=None, servers=[],
+ socks_port=None):
+ self.initial_pause = initial_pause
+ self.num_transfers = num_transfers
+ self.transfer_size = transfer_size
+ self.continuous_transfers = continuous_transfers
+ self.inter_transfer_pause = inter_transfer_pause
+ self.port = port
+ self.servers = servers
+ self.socks_port = socks_port
+
+
class GeneratableTGenModel(TGenModel, metaclass=ABCMeta):
@abstractmethod
@@ -58,61 +73,74 @@ class ListenModel(GeneratableTGenModel):
g.add_node("start", serverport=self.tgen_port, loglevel="info", heartbeat="1 minute")
return g
+
class TorperfModel(GeneratableTGenModel):
- def __init__(self, tgen_port="8889", tgen_servers=["127.0.0.1:8888"], socksproxy=None):
- self.tgen_port = tgen_port
- self.tgen_servers = tgen_servers
- self.socksproxy = socksproxy
+ def __init__(self, config):
+ self.config = config
self.graph = self.generate()
def generate(self):
- server_str = ','.join(self.tgen_servers)
+ server_str = ','.join(self.config.servers)
g = DiGraph()
- if self.socksproxy is not None:
- g.add_node("start", serverport=self.tgen_port, peers=server_str, loglevel="info", heartbeat="1 minute", socksproxy=self.socksproxy)
+ if self.config.socks_port is not None:
+ g.add_node("start",
+ serverport=self.config.port,
+ peers=server_str,
+ loglevel="info",
+ heartbeat="1 minute",
+ socksproxy="127.0.0.1:{0}".format(self.config.socks_port))
else:
- g.add_node("start", serverport=self.tgen_port, peers=server_str, loglevel="info", heartbeat="1 minute")
- g.add_node("pause", time="5 minutes")
- g.add_node("stream5m", sendsize="0", recvsize="5 mib", timeout="270 seconds", stallout="0 seconds")
+ g.add_node("start",
+ serverport=self.config.port,
+ peers=server_str,
+ loglevel="info",
+ heartbeat="1 minute")
+ g.add_node("pause", time="%d seconds" % self.config.initial_pause)
g.add_edge("start", "pause")
- # after the pause, we start another pause timer while *at the same time* choosing one of
- # the file sizes and downloading it from one of the servers in the server pool
- g.add_edge("pause", "pause")
-
- # these are chosen with weighted probability, change edge 'weight' attributes to adjust probability
- g.add_edge("pause", "stream5m")
-
- return g
-
-class OneshotModel(GeneratableTGenModel):
-
- def __init__(self, tgen_port="8889", tgen_servers=["127.0.0.1:8888"], socksproxy=None):
- self.tgen_port = tgen_port
- self.tgen_servers = tgen_servers
- self.socksproxy = socksproxy
- self.graph = self.generate()
-
- def generate(self):
- server_str = ','.join(self.tgen_servers)
- g = DiGraph()
-
- if self.socksproxy is not None:
- g.add_node("start", serverport=self.tgen_port, peers=server_str, loglevel="info", heartbeat="1 minute", socksproxy=self.socksproxy)
- else:
- g.add_node("start", serverport=self.tgen_port, peers=server_str, loglevel="info", heartbeat="1 minute")
- g.add_node("stream5m", sendsize="0", recvsize="5 mib", timeout="270 seconds", stallout="0 seconds")
-
- g.add_edge("start", "stream5m")
- g.add_edge("stream5m", "start")
+ # "One-shot mode," i.e., onionperf will stop after the given number of
+ # iterations. The idea is:
+ # start -> pause -> stream-1 -> pause-1 -> ... -> stream-n -> pause-n -> end
+ if self.config.num_transfers > 0:
+ for i in range(self.config.num_transfers):
+ g.add_node("stream-%d" % i,
+ sendsize="0",
+ recvsize=self.config.transfer_size,
+ timeout="15 seconds",
+ stallout="10 seconds")
+ g.add_node("pause-%d" % i,
+ time="%d seconds" % self.config.inter_transfer_pause)
+
+ g.add_edge("stream-%d" % i, "pause-%d" % i)
+ if i > 0:
+ g.add_edge("pause-%d" % (i-1), "stream-%d" % i)
+
+ g.add_node("end")
+ g.add_edge("pause", "stream-0")
+ g.add_edge("pause-%d" % (self.config.num_transfers - 1), "end")
+
+ # Continuous mode, i.e., onionperf will not stop. The idea is:
+ # start -> pause -> stream -> pause
+ # ^ |
+ # +-------+
+ elif self.config.continuous_transfers:
+ g.add_node("stream",
+ sendsize="0",
+ recvsize=self.config.transfer_size,
+ timeout="15 seconds",
+ stallout="10 seconds")
+ g.add_node("pause",
+ time="%d seconds" % self.config.inter_transfer_pause)
+ g.add_edge("pause", "stream")
+ g.add_edge("stream", "pause")
+ g.add_edge("pause", "stream")
return g
-
def dump_example_tgen_torperf_model(domain_name, onion_name):
# the server listens on 8888, the client uses Tor to come back directly, and using a hidden serv
server = ListenModel(tgen_port="8888")
diff --git a/onionperf/onionperf b/onionperf/onionperf
index e8024ce..d95e691 100755
--- a/onionperf/onionperf
+++ b/onionperf/onionperf
@@ -154,11 +154,6 @@ def main():
action="store", dest="tgenpath",
default=util.which("tgen"))
- measure_parser.add_argument('--oneshot',
- help="""Enables oneshot mode, onionperf closes on successfully downloading a file""",
- action="store_true", dest="oneshot",
- default=False)
-
measure_parser.add_argument('--additional-client-conf',
help="""Additional configuration lines for the Tor client, for example bridge lines""",
metavar="CONFIG", type=str,
@@ -195,6 +190,30 @@ def main():
action="store", dest="tgenconnectport",
default=8080)
+ measure_parser.add_argument('--tgen-start-pause',
+ help="""the number of seconds TGen should wait before walking through its action graph""",
+ metavar="N", type=int,
+ action="store", dest="tgenstartpause",
+ default=5)
+
+ measure_parser.add_argument('--tgen-intertransfer-pause',
+ help="""the number of seconds TGen should wait in between two transfers""",
+ metavar="N", type=int,
+ action="store", dest="tgenintertransferpause",
+ default=300)
+
+ measure_parser.add_argument('--tgen-transfer-size',
+ help="""the size of the file transfer that TGen will perform (e.g., '5 MiB' or '10 KiB')""",
+ metavar="STRING", type=str,
+ action="store", dest="tgentransfersize",
+ default="5 MiB")
+
+ measure_parser.add_argument('--tgen-num-transfers',
+ help="""the number of file transfers that TGen will perform""",
+ metavar="N", type=int,
+ action="store", dest="tgennumtransfers",
+ default=0)
+
onion_or_inet_only_group = measure_parser.add_mutually_exclusive_group()
onion_or_inet_only_group.add_argument('-o', '--onion-only',
@@ -327,7 +346,8 @@ def monitor(args):
writer.close()
def measure(args):
- from onionperf.measurement import Measurement
+ from onionperf.measurement import Measurement, TGenConf
+ from onionperf.model import TGenModelConf
# check paths
args.torpath = util.find_path(args.torpath, "tor")
@@ -347,12 +367,27 @@ def measure(args):
server_tor_ctl_port = util.get_random_free_port()
server_tor_socks_port = util.get_random_free_port()
+ tgen_client_conf = TGenConf(listen_port=client_tgen_port,
+ connect_ip=client_connect_ip,
+ connect_port=client_connect_port,
+ tor_ctl_port=client_tor_ctl_port,
+ tor_socks_port=client_tor_socks_port)
+
+ tgen_server_conf = TGenConf(listen_port=server_tgen_port,
+ tor_ctl_port=server_tor_ctl_port,
+ tor_socks_port=server_tor_socks_port)
+
+ tgen_model = TGenModelConf(initial_pause=args.tgenstartpause,
+ transfer_size=args.tgentransfersize,
+ num_transfers=args.tgennumtransfers,
+ continuous_transfers=args.tgennumtransfers == 0,
+ inter_transfer_pause=args.tgenintertransferpause)
+
meas = Measurement(args.torpath,
args.tgenpath,
args.prefix,
args.private_prefix,
args.nickname,
- args.oneshot,
args.additional_client_conf,
args.torclient_conf_file,
args.torserver_conf_file,
@@ -360,14 +395,9 @@ def measure(args):
meas.run(do_onion=not args.inet_only,
do_inet=not args.onion_only,
- client_tgen_listen_port=client_tgen_port,
- client_tgen_connect_ip=client_connect_ip,
- client_tgen_connect_port=client_connect_port,
- client_tor_ctl_port=client_tor_ctl_port,
- client_tor_socks_port=client_tor_socks_port,
- server_tgen_listen_port=server_tgen_port,
- server_tor_ctl_port=server_tor_ctl_port,
- server_tor_socks_port=server_tor_socks_port)
+ tgen_model=tgen_model,
+ tgen_client_conf=tgen_client_conf,
+ tgen_server_conf=tgen_server_conf)
else:
logging.info("Please fix path errors to continue")
diff --git a/onionperf/tests/test_measurement.py b/onionperf/tests/test_measurement.py
index e5010fa..6bca8ab 100644
--- a/onionperf/tests/test_measurement.py
+++ b/onionperf/tests/test_measurement.py
@@ -57,8 +57,8 @@ WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 seconds\nDataDirectory
known_config_server = "RunAsDaemon 0\nORPort 0\nDirPort 0\nControlPort 9001\nSocksPort 9050\nSocksListenAddress 127.0.0.1\nClientOnly 1\n\
WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 seconds\nDataDirectory /tmp/\nDataDirectoryGroupReadable 1\nLog INFO stdout\nUseEntryGuards 0\n"
- meas = measurement.Measurement(None, None, None, None, None, None,
- "UseBridges 1\n", None, None)
+ meas = measurement.Measurement(None, None, None, None, None,
+ "UseBridges 1\n", None, None, False)
config_client = meas.create_tor_config(9001, 9050, "/tmp/", "client")
config_server = meas.create_tor_config(9001, 9050, "/tmp/", "server")
assert_equals(config_client, known_config)
@@ -80,8 +80,8 @@ WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 seconds\nDataDirectory
known_config = "RunAsDaemon 0\nORPort 0\nDirPort 0\nControlPort 9001\nSocksPort 9050\nSocksListenAddress 127.0.0.1\nClientOnly 1\n\
WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 seconds\nDataDirectory /tmp/\nDataDirectoryGroupReadable 1\nLog INFO stdout\nUseBridges 1\n"
- meas = measurement.Measurement(None, None, None, None, None, None, None,
- absolute_data_path("config"), None)
+ meas = measurement.Measurement(None, None, None, None, None, None,
+ absolute_data_path("config"), None, False)
config_client = meas.create_tor_config(9001, 9050, "/tmp/", "client")
config_server = meas.create_tor_config(9001, 9050, "/tmp/", "server")
assert_equals(config_client, known_config)
@@ -103,8 +103,8 @@ WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 seconds\nDataDirectory
known_config = "RunAsDaemon 0\nORPort 0\nDirPort 0\nControlPort 9001\nSocksPort 9050\nSocksListenAddress 127.0.0.1\nClientOnly 1\n\
WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 seconds\nDataDirectory /tmp/\nDataDirectoryGroupReadable 1\nLog INFO stdout\nUseEntryGuards 0\n"
- meas = measurement.Measurement(None, None, None, None, None, None, None, None,
- absolute_data_path("config"))
+ meas = measurement.Measurement(None, None, None, None, None, None, None,
+ absolute_data_path("config"), False)
config_client = meas.create_tor_config(9001, 9050, "/tmp/", "client")
config_server = meas.create_tor_config(9001, 9050, "/tmp/", "server")
assert_equals(config_client, known_config)
01 Sep '20
commit a6aa4189e04ee1b05dbdb68b90f24e14bf3443ac
Author: Philipp Winter <phw(a)torproject.org>
Date: Fri Aug 14 16:47:04 2020 +0000
Apply 1 suggestion(s) to 1 file(s)
---
onionperf/model.py | 54 ++++++++++++++++++++++--------------------------------
1 file changed, 22 insertions(+), 32 deletions(-)
diff --git a/onionperf/model.py b/onionperf/model.py
index bdd5a53..b589249 100644
--- a/onionperf/model.py
+++ b/onionperf/model.py
@@ -104,40 +104,30 @@ class TorperfModel(GeneratableTGenModel):
# "One-shot mode," i.e., onionperf will stop after the given number of
# iterations. The idea is:
# start -> pause -> stream-1 -> pause-1 -> ... -> stream-n -> pause-n -> end
- if self.config.num_transfers > 0:
- for i in range(self.config.num_transfers):
- g.add_node("stream-%d" % i,
- sendsize="0",
- recvsize=self.config.transfer_size,
- timeout="15 seconds",
- stallout="10 seconds")
- g.add_node("pause-%d" % i,
- time="%d seconds" % self.config.inter_transfer_pause)
-
- g.add_edge("stream-%d" % i, "pause-%d" % i)
- if i > 0:
- g.add_edge("pause-%d" % (i-1), "stream-%d" % i)
-
+ g.add_node("stream",
+ sendsize="0",
+ recvsize=self.config.transfer_size,
+ timeout="15 seconds",
+ stallout="10 seconds")
+ g.add_node("pause_between",
+ time="%d seconds" % self.config.inter_transfer_pause)
+
+ g.add_edge("pause_initial", "stream")
+
+ # only add an end node if we need to stop
+ if self.config.continuous_transfers:
+ # continuous mode, i.e., no end node
+ g.add_edge("stream", "pause_between")
+ else:
+ # one-shot mode, i.e., end after configured number of transfers
g.add_node("end",
count=str(self.config.num_transfers))
- g.add_edge("pause", "stream-0")
- g.add_edge("pause-%d" % (self.config.num_transfers - 1), "end")
-
- # Continuous mode, i.e., onionperf will not stop. The idea is:
- # start -> pause -> stream -> pause
- # ^ |
- # +-------+
- elif self.config.continuous_transfers:
- g.add_node("stream",
- sendsize="0",
- recvsize=self.config.transfer_size,
- timeout="15 seconds",
- stallout="10 seconds")
- g.add_node("pause",
- time="%d seconds" % self.config.inter_transfer_pause)
- g.add_edge("pause", "stream")
- g.add_edge("stream", "pause")
- g.add_edge("pause", "stream")
+ # check for end condition after every transfer
+ g.add_edge("stream", "end")
+ # if end condition not met, pause
+ g.add_edge("end", "pause_between")
+
+ g.add_edge("pause_between", "stream")
return g
commit e9fd47d95db102b3a7ace36fa412e18d182c5fa4
Author: Karsten Loesing <karsten.loesing(a)gmx.net>
Date: Tue Jun 16 21:30:07 2020 +0200
Measure static guard nodes.
Add --drop-guards parameter to use and drop guards after a given
number of hours.
Implements #33399.
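A minimal sketch of the timing logic this adds (illustrative only: the
function name is made up, the real loop lives in TorMonitor.run() in the
diff below, and it assumes an authenticated stem Controller torctl plus
a threading.Event done_ev):

    import time
    from stem import Signal

    def guard_rotation_loop(torctl, done_ev,
                            newnym_interval_seconds=300,
                            drop_guards_interval_hours=12):
        # drop_guards_interval_hours=12 is an arbitrary example; the new
        # --drop-guards flag defaults to 0 (no guards at all).
        # Deadlines accumulate instead of resetting a counter, as in the patch.
        interval_count = 0
        next_newnym = newnym_interval_seconds
        next_drop_guards = drop_guards_interval_hours * 3600
        while not done_ev.is_set():
            time.sleep(1)
            interval_count += 1
            if interval_count >= next_newnym:
                next_newnym += newnym_interval_seconds
                torctl.signal(Signal.NEWNYM)
            if drop_guards_interval_hours > 0 and interval_count >= next_drop_guards:
                next_drop_guards += drop_guards_interval_hours * 3600
                torctl.drop_guards()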
---
onionperf/measurement.py | 7 ++++---
onionperf/monitor.py | 18 +++++++++++++-----
onionperf/onionperf | 9 ++++++++-
3 files changed, 25 insertions(+), 9 deletions(-)
diff --git a/onionperf/measurement.py b/onionperf/measurement.py
index 4a58bc4..899b277 100644
--- a/onionperf/measurement.py
+++ b/onionperf/measurement.py
@@ -172,7 +172,7 @@ def logrotate_thread_task(writables, tgen_writable, torctl_writable, docroot, ni
class Measurement(object):
- def __init__(self, tor_bin_path, tgen_bin_path, datadir_path, privatedir_path, nickname, oneshot, additional_client_conf=None, torclient_conf_file=None, torserver_conf_file=None, single_onion=False):
+ def __init__(self, tor_bin_path, tgen_bin_path, datadir_path, privatedir_path, nickname, oneshot, additional_client_conf=None, torclient_conf_file=None, torserver_conf_file=None, single_onion=False, drop_guards_interval_hours=None):
self.tor_bin_path = tor_bin_path
self.tgen_bin_path = tgen_bin_path
self.datadir_path = datadir_path
@@ -188,6 +188,7 @@ class Measurement(object):
self.torclient_conf_file = torclient_conf_file
self.torserver_conf_file = torserver_conf_file
self.single_onion = single_onion
+ self.drop_guards_interval_hours = drop_guards_interval_hours
def run(self, do_onion=True, do_inet=True, client_tgen_listen_port=58888, client_tgen_connect_ip='0.0.0.0', client_tgen_connect_port=8080, client_tor_ctl_port=59050, client_tor_socks_port=59000,
server_tgen_listen_port=8080, server_tor_ctl_port=59051, server_tor_socks_port=59001):
@@ -388,7 +389,7 @@ WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 seconds\nDataDirectory
tor_config = tor_config + f.read()
if name == "client" and self.additional_client_conf:
tor_config += self.additional_client_conf
- if not 'UseEntryGuards' in tor_config and not 'UseBridges' in tor_config:
+ if not 'UseEntryGuards' in tor_config and not 'UseBridges' in tor_config and self.drop_guards_interval_hours == 0:
tor_config += "UseEntryGuards 0\n"
if name == "server" and self.single_onion:
tor_config += "HiddenServiceSingleHopMode 1\nHiddenServiceNonAnonymousMode 1\n"
@@ -467,7 +468,7 @@ WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 seconds\nDataDirectory
torctl_events = [e for e in monitor.get_supported_torctl_events() if e not in ['DEBUG', 'INFO', 'NOTICE', 'WARN', 'ERR']]
newnym_interval_seconds = 300
- torctl_args = (control_port, torctl_writable, torctl_events, newnym_interval_seconds, self.done_event)
+ torctl_args = (control_port, torctl_writable, torctl_events, newnym_interval_seconds, self.drop_guards_interval_hours, self.done_event)
torctl_helper = threading.Thread(target=monitor.tor_monitor_run, name="torctl_{0}_helper".format(name), args=torctl_args)
torctl_helper.start()
self.threads.append(torctl_helper)
diff --git a/onionperf/monitor.py b/onionperf/monitor.py
index 5387bff..ac6fea9 100644
--- a/onionperf/monitor.py
+++ b/onionperf/monitor.py
@@ -22,7 +22,7 @@ class TorMonitor(object):
self.writable = writable
self.events = events
- def run(self, newnym_interval_seconds=None, done_ev=None):
+ def run(self, newnym_interval_seconds=None, drop_guards_interval_hours=0, done_ev=None):
with Controller.from_port(port=self.tor_ctl_port) as torctl:
torctl.authenticate()
@@ -54,6 +54,10 @@ class TorMonitor(object):
# let stem run its threads and log all of the events, until user interrupts
try:
interval_count = 0
+ if newnym_interval_seconds is not None:
+ next_newnym = newnym_interval_seconds
+ if drop_guards_interval_hours > 0:
+ next_drop_guards = drop_guards_interval_hours * 3600
while done_ev is None or not done_ev.is_set():
# if self.filepath != '-' and os.path.exists(self.filepath):
# with open(self.filepath, 'rb') as sizef:
@@ -61,9 +65,13 @@ class TorMonitor(object):
# logging.info(msg)
sleep(1)
interval_count += 1
- if newnym_interval_seconds is not None and interval_count >= newnym_interval_seconds:
- interval_count = 0
+ if newnym_interval_seconds is not None and interval_count >= next_newnym:
+ next_newnym += newnym_interval_seconds
torctl.signal(Signal.NEWNYM)
+ if drop_guards_interval_hours > 0 and interval_count >= next_drop_guards:
+ next_drop_guards += drop_guards_interval_hours * 3600
+ torctl.drop_guards()
+
except KeyboardInterrupt:
pass # the user hit ctrl+c
@@ -79,6 +87,6 @@ class TorMonitor(object):
unix_ts = (utcnow - epoch).total_seconds()
writable.write("{0} {1:.02f} {2}".format(now.strftime("%Y-%m-%d %H:%M:%S"), unix_ts, msg))
-def tor_monitor_run(tor_ctl_port, writable, events, newnym_interval_seconds, done_ev):
+def tor_monitor_run(tor_ctl_port, writable, events, newnym_interval_seconds, drop_guards_interval_hours, done_ev):
torctl_monitor = TorMonitor(tor_ctl_port, writable, events)
- torctl_monitor.run(newnym_interval_seconds=newnym_interval_seconds, done_ev=done_ev)
+ torctl_monitor.run(newnym_interval_seconds=newnym_interval_seconds, drop_guards_interval_hours=drop_guards_interval_hours, done_ev=done_ev)
diff --git a/onionperf/onionperf b/onionperf/onionperf
index a7d32f6..52a779f 100755
--- a/onionperf/onionperf
+++ b/onionperf/onionperf
@@ -194,6 +194,12 @@ def main():
action="store", dest="tgenconnectport",
default=8080)
+ measure_parser.add_argument('--drop-guards',
+ help="""Use and drop guards every N > 0 hours, or do not use guards at all if N = 0""",
+ metavar="N", type=type_nonnegative_integer,
+ action="store", dest="drop_guards_interval_hours",
+ default=0)
+
onion_or_inet_only_group = measure_parser.add_mutually_exclusive_group()
onion_or_inet_only_group.add_argument('-o', '--onion-only',
@@ -360,7 +366,8 @@ def measure(args):
args.additional_client_conf,
args.torclient_conf_file,
args.torserver_conf_file,
- args.single_onion)
+ args.single_onion,
+ args.drop_guards_interval_hours)
meas.run(do_onion=not args.inet_only,
do_inet=not args.onion_only,
[onionperf/master] Let TGen client finish by itself in one-shot mode.
by karsten@torproject.org 01 Sep '20
commit 959cf3689106189001a83c7e58dc40e10497a081
Author: Philipp Winter <phw(a)nymity.ch>
Date: Fri Aug 7 14:48:58 2020 -0700
Let TGen client finish by itself in one-shot mode.
We tell the TGen client to finish on its own by passing the count
option to the end node:
https://github.com/shadow/tgen/blob/master/doc/TGen-Options.md#end-options
This patch adds another argument, no_relaunch, to the function
watchdog_thread_task(); it instructs the function not to re-launch its
process when that process exits on its own.
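A simplified sketch of that behavior (not the patch itself, which also
handles stdin, readiness strings, and failure back-off; cmd and the
watchdog name here are illustrative):

    import subprocess
    import threading

    def watchdog(cmd, done_ev, no_relaunch):
        # Keep (re)launching cmd until told to stop.
        while not done_ev.is_set():
            subp = subprocess.Popen(cmd, shell=True)
            while subp.poll() is None and not done_ev.is_set():
                done_ev.wait(1)
            if done_ev.is_set():
                if subp.poll() is None:
                    subp.terminate()  # measurement is over, stop the child
                break
            if no_relaunch:
                # The TGen client reached its end node and exited cleanly,
                # so treat the measurement as finished instead of restarting.
                done_ev.set()
                break
            # Otherwise the child exited unexpectedly: loop and re-launch it.

    # Usage sketch: done_ev = threading.Event()
    #               watchdog("tgen tgen.graphml.xml", done_ev, no_relaunch=True)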
---
onionperf/measurement.py | 45 +++++++++++++++++++++++----------------------
onionperf/model.py | 3 ++-
2 files changed, 25 insertions(+), 23 deletions(-)
diff --git a/onionperf/measurement.py b/onionperf/measurement.py
index e2d8d1c..d699292 100644
--- a/onionperf/measurement.py
+++ b/onionperf/measurement.py
@@ -50,10 +50,11 @@ def readline_thread_task(instream, q):
# wait for lines from stdout until the EOF
for line in iter(instream.readline, b''): q.put(line)
-def watchdog_thread_task(cmd, cwd, writable, done_ev, send_stdin, ready_search_str, ready_ev):
+def watchdog_thread_task(cmd, cwd, writable, done_ev, send_stdin, ready_search_str, ready_ev, no_relaunch):
- # launch or re-launch our sub process until we are told to stop
- # if we fail too many times in too short of time, give up and exit
+ # launch or re-launch (or don't re-launch, if no_relaunch is set) our sub
+ # process until we are told to stop if we fail too many times in too short
+ # of time, give up and exit
failure_times = []
pause_time_seconds = 0
while done_ev.is_set() is False:
@@ -105,6 +106,10 @@ def watchdog_thread_task(cmd, cwd, writable, done_ev, send_stdin, ready_search_s
subp.wait()
elif done_ev.is_set():
logging.info("command '{}' finished as expected".format(cmd))
+ elif no_relaunch:
+ logging.info("command '{}' finished on its own".format(cmd))
+ # our command finished on its own. time to terminate.
+ done_ev.set()
else:
logging.warning("command '{}' finished before expected".format(cmd))
now = time.time()
@@ -284,15 +289,9 @@ class Measurement(object):
time.sleep(1)
while True:
if tgen_model.num_transfers:
- downloads = 0
- while True:
- downloads = self.__get_download_count(tgen_client_writable.filename)
- time.sleep(1)
- if downloads >= tgen_model.num_transfers:
- logging.info("Onionperf has downloaded %d files and will now shut down." % tgen_model.num_transfers)
- break
- else:
- continue
+ # This function blocks until our TGen client process
+ # terminated on its own.
+ self.__wait_for_tgen_client()
break
if self.__is_alive():
@@ -366,7 +365,10 @@ class Measurement(object):
logging.info("Logging TGen {1} process output to {0}".format(tgen_logpath, name))
tgen_cmd = "{0} {1}".format(self.tgen_bin_path, tgen_confpath)
- tgen_args = (tgen_cmd, tgen_datadir, tgen_writable, self.done_event, None, None, None)
+ # If we're running in "one-shot mode", TGen client will terminate on
+ # its own and we don't need our watchdog to restart the process.
+ no_relaunch = (name == "client" and tgen_model_conf.num_transfers)
+ tgen_args = (tgen_cmd, tgen_datadir, tgen_writable, self.done_event, None, None, None, no_relaunch)
tgen_watchdog = threading.Thread(target=watchdog_thread_task, name="tgen_{0}_watchdog".format(name), args=tgen_args)
tgen_watchdog.start()
self.threads.append(tgen_watchdog)
@@ -464,7 +466,7 @@ WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 seconds\nDataDirectory
tor_stdin_bytes = str_tools._to_bytes(tor_config)
tor_ready_str = "Bootstrapped 100"
tor_ready_ev = threading.Event()
- tor_args = (tor_cmd, tor_datadir, tor_writable, self.done_event, tor_stdin_bytes, tor_ready_str, tor_ready_ev)
+ tor_args = (tor_cmd, tor_datadir, tor_writable, self.done_event, tor_stdin_bytes, tor_ready_str, tor_ready_ev, False)
tor_watchdog = threading.Thread(target=watchdog_thread_task, name="tor_{0}_watchdog".format(name), args=tor_args)
tor_watchdog.start()
self.threads.append(tor_watchdog)
@@ -491,14 +493,13 @@ WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 seconds\nDataDirectory
return tor_writable, torctl_writable
- def __get_download_count(self, tgen_logpath):
- count = 0
- if tgen_logpath is not None and os.path.exists(tgen_logpath):
- with open(tgen_logpath, 'r') as fin:
- for line in fin:
- if re.search("transfer-complete", line) is not None:
- count += 1
- return count
+ def __wait_for_tgen_client(self):
+ logging.info("Waiting for TGen client to finish.")
+ for t in self.threads:
+ if t.getName() == "tgen_client_watchdog":
+ while t.is_alive():
+ time.sleep(1)
+ logging.info("TGen client finished.")
def __is_alive(self):
all_alive = True
diff --git a/onionperf/model.py b/onionperf/model.py
index a4af2fc..bdd5a53 100644
--- a/onionperf/model.py
+++ b/onionperf/model.py
@@ -118,7 +118,8 @@ class TorperfModel(GeneratableTGenModel):
if i > 0:
g.add_edge("pause-%d" % (i-1), "stream-%d" % i)
- g.add_node("end")
+ g.add_node("end",
+ count=str(self.config.num_transfers))
g.add_edge("pause", "stream-0")
g.add_edge("pause-%d" % (self.config.num_transfers - 1), "end")
[onionperf/master] Rename variables for consistency and clarity.
by karsten@torproject.org 01 Sep '20
commit 86f746c3c6ee6dc3be342f53fea90e6c02d39991
Author: Philipp Winter <phw(a)nymity.ch>
Date: Fri Aug 14 10:49:09 2020 -0700
Rename variables for consistency and clarity.
initial_pause -> pause_initial
inter_transfer_pause -> pause_between
Thanks to Rob for the suggestion.
---
onionperf/model.py | 19 ++++++++-----------
onionperf/onionperf | 12 ++++++------
2 files changed, 14 insertions(+), 17 deletions(-)
diff --git a/onionperf/model.py b/onionperf/model.py
index b589249..3bfe35f 100644
--- a/onionperf/model.py
+++ b/onionperf/model.py
@@ -43,14 +43,14 @@ class TGenLoadableModel(TGenModel):
class TGenModelConf(object):
"""Represents a TGen traffic model configuration."""
- def __init__(self, initial_pause=0, num_transfers=1, transfer_size="5 MiB",
- continuous_transfers=False, inter_transfer_pause=5, port=None, servers=[],
+ def __init__(self, pause_initial=0, num_transfers=1, transfer_size="5 MiB",
+ continuous_transfers=False, pause_between=5, port=None, servers=[],
socks_port=None):
- self.initial_pause = initial_pause
+ self.pause_initial = pause_initial
+ self.pause_between = pause_between
self.num_transfers = num_transfers
self.transfer_size = transfer_size
self.continuous_transfers = continuous_transfers
- self.inter_transfer_pause = inter_transfer_pause
self.port = port
self.servers = servers
self.socks_port = socks_port
@@ -98,20 +98,17 @@ class TorperfModel(GeneratableTGenModel):
loglevel="info",
heartbeat="1 minute")
- g.add_node("pause", time="%d seconds" % self.config.initial_pause)
- g.add_edge("start", "pause")
-
- # "One-shot mode," i.e., onionperf will stop after the given number of
- # iterations. The idea is:
- # start -> pause -> stream-1 -> pause-1 -> ... -> stream-n -> pause-n -> end
+ g.add_node("pause_initial",
+ time="%d seconds" % self.config.pause_initial)
g.add_node("stream",
sendsize="0",
recvsize=self.config.transfer_size,
timeout="15 seconds",
stallout="10 seconds")
g.add_node("pause_between",
- time="%d seconds" % self.config.inter_transfer_pause)
+ time="%d seconds" % self.config.pause_between)
+ g.add_edge("start", "pause_initial")
g.add_edge("pause_initial", "stream")
# only add an end node if we need to stop
diff --git a/onionperf/onionperf b/onionperf/onionperf
index d95e691..a49982b 100755
--- a/onionperf/onionperf
+++ b/onionperf/onionperf
@@ -190,16 +190,16 @@ def main():
action="store", dest="tgenconnectport",
default=8080)
- measure_parser.add_argument('--tgen-start-pause',
+ measure_parser.add_argument('--tgen-pause-initial',
help="""the number of seconds TGen should wait before walking through its action graph""",
metavar="N", type=int,
- action="store", dest="tgenstartpause",
+ action="store", dest="tgenpauseinitial",
default=5)
- measure_parser.add_argument('--tgen-intertransfer-pause',
+ measure_parser.add_argument('--tgen-pause-between',
help="""the number of seconds TGen should wait in between two transfers""",
metavar="N", type=int,
- action="store", dest="tgenintertransferpause",
+ action="store", dest="tgenpausebetween",
default=300)
measure_parser.add_argument('--tgen-transfer-size',
@@ -377,11 +377,11 @@ def measure(args):
tor_ctl_port=server_tor_ctl_port,
tor_socks_port=server_tor_socks_port)
- tgen_model = TGenModelConf(initial_pause=args.tgenstartpause,
+ tgen_model = TGenModelConf(pause_initial=args.tgenpauseinitial,
transfer_size=args.tgentransfersize,
num_transfers=args.tgennumtransfers,
continuous_transfers=args.tgennumtransfers == 0,
- inter_transfer_pause=args.tgenintertransferpause)
+ pause_between=args.tgenpausebetween)
meas = Measurement(args.torpath,
args.tgenpath,
01 Sep '20
commit b8f1e5c2695c097a7494f7975403664c0c833825
Author: Karsten Loesing <karsten.loesing(a)gmx.net>
Date: Sun Aug 16 22:03:34 2020 +0200
Make some tweaks to new TGen model.
- Change timeout back to 270 seconds and stallout back to 0 seconds.
- Change initial pause to 300 seconds to keep default behavior
unchanged.
- Change the model so that pause_between starts in parallel to a
stream, not when the stream is completed. This is the same behavior
as before.
Also add a change log entry for all changes.
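The resulting action graph can be sketched as follows (a simplified
illustration of the edges in the diff below; the real generate() also
sets serverport, peers, and socks options on the start node):

    from networkx import DiGraph

    def sketch_torperf_graph(transfer_size="5 MiB", pause_initial=300,
                             pause_between=300, num_transfers=1,
                             continuous=False):
        g = DiGraph()
        g.add_node("start", loglevel="info", heartbeat="1 minute")
        g.add_node("pause_initial", time="%d seconds" % pause_initial)
        g.add_node("stream", sendsize="0", recvsize=transfer_size,
                   timeout="270 seconds", stallout="0 seconds")
        g.add_node("pause_between", time="%d seconds" % pause_between)
        g.add_edge("start", "pause_initial")
        g.add_edge("pause_initial", "stream")
        # pause_between starts in parallel to the first stream ...
        g.add_edge("pause_initial", "pause_between")
        # ... and afterwards keeps triggering new streams and re-arming itself.
        g.add_edge("pause_between", "stream")
        g.add_edge("pause_between", "pause_between")
        if not continuous:
            # One-shot mode: stop after num_transfers completed streams.
            g.add_node("end", count="%d" % num_transfers)
            g.add_edge("stream", "end")
        return g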
---
CHANGELOG.md | 7 +++++++
onionperf/model.py | 20 ++++++++------------
onionperf/onionperf | 2 +-
3 files changed, 16 insertions(+), 13 deletions(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0c4c4f2..ac6897b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,10 @@
+# Changes in version 0.7 - 2020-??-??
+
+ - Remove the `onionperf measure --oneshot` switch and replace it with
+ new switches `--tgen-pause-initial`, `--tgen-pause-between`,
+ `--tgen-transfer-size`, and `--tgen-num-transfers ` to further
+ configure the generated TGen model.
+
# Changes in version 0.6 - 2020-??-??
- Update to TGen 1.0.0, use TGenTools for parsing TGen log files, and
diff --git a/onionperf/model.py b/onionperf/model.py
index d45763e..fde587f 100644
--- a/onionperf/model.py
+++ b/onionperf/model.py
@@ -43,8 +43,8 @@ class TGenLoadableModel(TGenModel):
class TGenModelConf(object):
"""Represents a TGen traffic model configuration."""
- def __init__(self, pause_initial=0, num_transfers=1, transfer_size="5 MiB",
- continuous_transfers=False, pause_between=5, port=None, servers=[],
+ def __init__(self, pause_initial=300, num_transfers=1, transfer_size="5 MiB",
+ continuous_transfers=False, pause_between=300, port=None, servers=[],
socks_port=None):
self.pause_initial = pause_initial
self.pause_between = pause_between
@@ -103,28 +103,24 @@ class TorperfModel(GeneratableTGenModel):
g.add_node("stream",
sendsize="0",
recvsize=self.config.transfer_size,
- timeout="15 seconds",
- stallout="10 seconds")
+ timeout="270 seconds",
+ stallout="0 seconds")
g.add_node("pause_between",
time="%d seconds" % self.config.pause_between)
g.add_edge("start", "pause_initial")
g.add_edge("pause_initial", "stream")
+ g.add_edge("pause_initial", "pause_between")
+ g.add_edge("pause_between", "stream")
+ g.add_edge("pause_between", "pause_between")
# only add an end node if we need to stop
- if self.config.continuous_transfers:
- # continuous mode, i.e., no end node
- g.add_edge("stream", "pause_between")
- else:
+ if not self.config.continuous_transfers:
# one-shot mode, i.e., end after configured number of transfers
g.add_node("end",
count="%d" % self.config.num_transfers)
# check for end condition after every transfer
g.add_edge("stream", "end")
- # if end condition not met, pause
- g.add_edge("end", "pause_between")
-
- g.add_edge("pause_between", "stream")
return g
diff --git a/onionperf/onionperf b/onionperf/onionperf
index a49982b..6a16da2 100755
--- a/onionperf/onionperf
+++ b/onionperf/onionperf
@@ -194,7 +194,7 @@ def main():
help="""the number of seconds TGen should wait before walking through its action graph""",
metavar="N", type=int,
action="store", dest="tgenpauseinitial",
- default=5)
+ default=300)
measure_parser.add_argument('--tgen-pause-between',
help="""the number of seconds TGen should wait in between two transfers""",
01 Sep '20
commit 1eea5e10700c76f8e1b37e626eaeaf96c5488150
Author: Philipp Winter <phw(a)nymity.ch>
Date: Fri Aug 14 11:29:01 2020 -0700
Use format string for consistency.
Thanks to Rob for pointing this out.
---
onionperf/model.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/onionperf/model.py b/onionperf/model.py
index 3bfe35f..d45763e 100644
--- a/onionperf/model.py
+++ b/onionperf/model.py
@@ -118,7 +118,7 @@ class TorperfModel(GeneratableTGenModel):
else:
# one-shot mode, i.e., end after configured number of transfers
g.add_node("end",
- count=str(self.config.num_transfers))
+ count="%d" % self.config.num_transfers)
# check for end condition after every transfer
g.add_edge("stream", "end")
# if end condition not met, pause
01 Sep '20
commit 4ff257c4270c0d1e5fd0f1ef76e640696ec2c514
Merge: e333be2 e9fd47d
Author: Karsten Loesing <karsten.loesing(a)gmx.net>
Date: Thu Aug 20 15:05:23 2020 +0200
Merge branch 'task-33399' into develop
onionperf/measurement.py | 7 ++++---
onionperf/monitor.py | 18 +++++++++++++-----
onionperf/onionperf | 9 ++++++++-
3 files changed, 25 insertions(+), 9 deletions(-)