tor-commits
November 2018
- 19 participants
- 2292 discussions

[tor-browser-build/master] Bug 26843: add projects/firefox-locale-bundle
by gk@torproject.org 22 Nov '18
commit 43c9452946313a5ab3dd064501daa05096db86eb
Author: Nicolas Vigier <boklm(a)torproject.org>
Date: Mon Nov 19 13:04:54 2018 +0100
Bug 26843: add projects/firefox-locale-bundle
projects/firefox-locale-bundle will clone or pull the mercurial
repositories for each locale into the hg_clones/firefox-locale-bundle
directory, and generate tarballs using the changesets listed in the
firefox json file. The script get_hg_hash is used to parse
this json file and output the hg changeset for the selected locale.
Patch based on work started by GeKo.
---
README | 9 ++++++---
projects/firefox-locale-bundle/build | 29 +++++++++++++++++++++++++++++
projects/firefox-locale-bundle/config | 12 ++++++++++++
projects/firefox-locale-bundle/get_hg_hash | 21 +++++++++++++++++++++
4 files changed, 68 insertions(+), 3 deletions(-)
diff --git a/README b/README
index a20b659..8ebef9b 100644
--- a/README
+++ b/README
@@ -14,8 +14,10 @@ to extract container file systems, start containers and copy files to and
from containers.
The sources of most components are downloaded using git, which needs to
-be installed. The sources of webrtc are downloaded using gclient, which
-requires GTK+ 2.0 development files and curl to be installed.
+be installed. Some components are downloaded using mercurial which also
+needs to be installed. The sources of webrtc are downloaded using
+gclient, which requires GTK+ 2.0 development files and curl to be
+installed.
You also need a few perl modules installed:
- YAML::XS
@@ -41,7 +43,8 @@ If you are running Debian or Ubuntu, you can install them with:
libio-captureoutput-perl libpath-tiny-perl \
libstring-shellquote-perl libsort-versions-perl \
libdigest-sha-perl libdata-uuid-perl libdata-dump-perl \
- libfile-copy-recursive-perl git libgtk2.0-dev curl runc
+ libfile-copy-recursive-perl git libgtk2.0-dev curl runc \
+ mercurial
The build system is based on rbm, which is included as a git submodule
in the rbm/ directory. You can fetch the rbm git submodule by running
diff --git a/projects/firefox-locale-bundle/build b/projects/firefox-locale-bundle/build
new file mode 100644
index 0000000..3fec48e
--- /dev/null
+++ b/projects/firefox-locale-bundle/build
@@ -0,0 +1,29 @@
+#!/bin/bash
+
+[% c("var/set_default_env") -%]
+
+clone_dir='[% c("basedir") %]/hg_clones/[% project %]'
+mkdir -p "$clone_dir"
+cd "$clone_dir"
+tmpdir=$(mktemp -d)
+
+[% FOREACH lang = c('var/locales') %]
+ [% SET lang = tmpl(lang);
+ SET hgurl = "https://hg.mozilla.org/l10n-central/" _ lang %]
+ if test -d [% lang %]
+ then
+ cd [% lang %]
+ [% c("hg") %] pull [% hgurl %]
+ else
+ [% c("hg") %] clone [% hgurl %] [% lang %]
+ cd [% lang %]
+ fi
+ hg_hash=$([% c("basedir") %]/projects/firefox-locale-bundle/get_hg_hash \
+ $rootdir/[% c('input_files_by_name/firefox_json') %] \
+ [% lang %])
+ [% c("hg") %] archive -r "$hg_hash" -t files "$tmpdir"/[% lang %]
+ cd ..
+[% END %]
+
+tar -C "$tmpdir" -czf [% dest_dir %]/[% c("filename") %] .
+rm -Rf "$tmpdir"
diff --git a/projects/firefox-locale-bundle/config b/projects/firefox-locale-bundle/config
new file mode 100644
index 0000000..13c5fb8
--- /dev/null
+++ b/projects/firefox-locale-bundle/config
@@ -0,0 +1,12 @@
+# vim: filetype=yaml sw=2
+version: '[% c("var/ff_version") %]-[% c("var/ff_build") %]'
+filename: '[% project %]-[% c("version") %]-[% c("var/build_id") %].tar.gz'
+
+var:
+ use_container: 0
+ ff_version: '[% pc("firefox", "var/firefox_version") %]'
+ ff_build: build1
+
+input_files:
+ - name: firefox_json
+ URL: 'https://product-details.mozilla.org/1.0/l10n/Firefox-[% c("var/ff_version") %]-[% c("var/ff_build") %].json'
diff --git a/projects/firefox-locale-bundle/get_hg_hash b/projects/firefox-locale-bundle/get_hg_hash
new file mode 100755
index 0000000..0531113
--- /dev/null
+++ b/projects/firefox-locale-bundle/get_hg_hash
@@ -0,0 +1,21 @@
+#!/usr/bin/perl -w
+use strict;
+use File::Slurp;
+use JSON;
+
+sub exit_error {
+ print STDERR "Error: ", $_[0], "\n";
+ exit (exists $_[1] ? $_[1] : 1);
+}
+
+exit_error("Wrong number of arguments", 1) unless @ARGV == 2;
+
+my ($file, $locale) = @ARGV;
+my $json_text = read_file($file);
+exit_error("Error reading $file", 2) unless defined $json_text;
+
+my $data = decode_json($json_text);
+
+my $changeset = $data->{locales}{$locale}{changeset};
+exit_error("Can't find locale $locale in $file", 3) unless $changeset;
+print "$changeset\n";
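For reference, the lookup that get_hg_hash performs can be sketched in Python. This is a minimal re-implementation for illustration, not part of the commit; the JSON structure ({"locales": {"<lang>": {"changeset": "<hash>"}}}) is inferred from the Perl script above.

```python
# Sketch of get_hg_hash: look up the hg changeset for a locale in
# Mozilla's Firefox l10n JSON file (structure inferred from the Perl
# script in the commit above).
import json
import sys


def get_hg_hash(json_path, locale):
    with open(json_path) as f:
        data = json.load(f)
    # The Perl version reads $data->{locales}{$locale}{changeset}.
    changeset = data.get("locales", {}).get(locale, {}).get("changeset")
    if not changeset:
        sys.exit("Error: Can't find locale %s in %s" % (locale, json_path))
    return changeset
```

The build script then passes the returned changeset to `hg archive -r` to export that locale's files at the exact revision the Firefox release was built against.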

[metrics-web/master] Stop generating servers.csv in legacy module.
by karsten@torproject.org 22 Nov '18
commit 6765b8ec054bfac54bff188087031e4cf6a46d64
Author: Karsten Loesing <karsten.loesing(a)gmx.net>
Date: Mon Nov 12 10:28:06 2018 +0100
Stop generating servers.csv in legacy module.
Also stop importing bridge network size statistics into the database.
Required changes to existing legacy.config (removals):
ImportSanitizedBridges
SanitizedBridgesDirectory
KeepSanitizedBridgesImportHistory
WriteBridgeStats
Required schema changes to live tordir databases:
DROP VIEW stats_servers;
CREATE OR REPLACE FUNCTION refresh_all() [...]
DROP TABLE bridge_network_size;
DROP FUNCTION refresh_relay_versions();
DROP FUNCTION refresh_relay_platforms();
DROP FUNCTION refresh_network_size();
DROP TABLE relay_versions;
DROP TABLE relay_platforms;
DROP TABLE relay_countries;
DROP TABLE network_size;
Part of #28116.
---
.../metrics/stats/servers/Configuration.java | 35 --
.../stats/servers/ConsensusStatsFileHandler.java | 398 ---------------------
.../org/torproject/metrics/stats/servers/Main.java | 17 -
src/main/resources/legacy.config.template | 15 -
src/main/sql/legacy/tordir.sql | 284 ---------------
5 files changed, 749 deletions(-)
diff --git a/src/main/java/org/torproject/metrics/stats/servers/Configuration.java b/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
index 8435b90..c4597bc 100644
--- a/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
@@ -30,12 +30,6 @@ public class Configuration {
private boolean keepDirectoryArchiveImportHistory = false;
- private boolean importSanitizedBridges = false;
-
- private String sanitizedBridgesDirectory = "in/bridge-descriptors/";
-
- private boolean keepSanitizedBridgesImportHistory = false;
-
private boolean writeRelayDescriptorDatabase = false;
private String relayDescriptorDatabaseJdbc =
@@ -45,8 +39,6 @@ public class Configuration {
private String relayDescriptorRawFilesDirectory = "pg-import/";
- private boolean writeBridgeStats = false;
-
/** Initializes this configuration class. */
public Configuration() {
@@ -67,14 +59,6 @@ public class Configuration {
} else if (line.startsWith("KeepDirectoryArchiveImportHistory")) {
this.keepDirectoryArchiveImportHistory = Integer.parseInt(
line.split(" ")[1]) != 0;
- } else if (line.startsWith("ImportSanitizedBridges")) {
- this.importSanitizedBridges = Integer.parseInt(
- line.split(" ")[1]) != 0;
- } else if (line.startsWith("SanitizedBridgesDirectory")) {
- this.sanitizedBridgesDirectory = line.split(" ")[1];
- } else if (line.startsWith("KeepSanitizedBridgesImportHistory")) {
- this.keepSanitizedBridgesImportHistory = Integer.parseInt(
- line.split(" ")[1]) != 0;
} else if (line.startsWith("WriteRelayDescriptorDatabase")) {
this.writeRelayDescriptorDatabase = Integer.parseInt(
line.split(" ")[1]) != 0;
@@ -85,9 +69,6 @@ public class Configuration {
line.split(" ")[1]) != 0;
} else if (line.startsWith("RelayDescriptorRawFilesDirectory")) {
this.relayDescriptorRawFilesDirectory = line.split(" ")[1];
- } else if (line.startsWith("WriteBridgeStats")) {
- this.writeBridgeStats = Integer.parseInt(
- line.split(" ")[1]) != 0;
} else if (!line.startsWith("#") && line.length() > 0) {
log.error("Configuration file contains unrecognized "
+ "configuration key in line '{}'! Exiting!", line);
@@ -136,18 +117,6 @@ public class Configuration {
return this.writeRelayDescriptorDatabase;
}
- public boolean getImportSanitizedBridges() {
- return this.importSanitizedBridges;
- }
-
- public String getSanitizedBridgesDirectory() {
- return this.sanitizedBridgesDirectory;
- }
-
- public boolean getKeepSanitizedBridgesImportHistory() {
- return this.keepSanitizedBridgesImportHistory;
- }
-
public String getRelayDescriptorDatabaseJdbc() {
return this.relayDescriptorDatabaseJdbc;
}
@@ -159,9 +128,5 @@ public class Configuration {
public String getRelayDescriptorRawFilesDirectory() {
return this.relayDescriptorRawFilesDirectory;
}
-
- public boolean getWriteBridgeStats() {
- return this.writeBridgeStats;
- }
}
diff --git a/src/main/java/org/torproject/metrics/stats/servers/ConsensusStatsFileHandler.java b/src/main/java/org/torproject/metrics/stats/servers/ConsensusStatsFileHandler.java
deleted file mode 100644
index 960069c..0000000
--- a/src/main/java/org/torproject/metrics/stats/servers/ConsensusStatsFileHandler.java
+++ /dev/null
@@ -1,398 +0,0 @@
-/* Copyright 2011--2018 The Tor Project
- * See LICENSE for licensing information */
-
-package org.torproject.metrics.stats.servers;
-
-import org.torproject.descriptor.BridgeNetworkStatus;
-import org.torproject.descriptor.Descriptor;
-import org.torproject.descriptor.DescriptorReader;
-import org.torproject.descriptor.DescriptorSourceFactory;
-import org.torproject.descriptor.NetworkStatusEntry;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.BufferedReader;
-import java.io.BufferedWriter;
-import java.io.File;
-import java.io.FileReader;
-import java.io.FileWriter;
-import java.io.IOException;
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.sql.Statement;
-import java.text.ParseException;
-import java.text.SimpleDateFormat;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.SortedMap;
-import java.util.TimeZone;
-import java.util.TreeMap;
-
-/**
- * Generates statistics on the average number of relays and bridges per
- * day. Accepts parse results from {@code RelayDescriptorParser} and
- * {@code BridgeDescriptorParser} and stores them in intermediate
- * result files {@code stats/consensus-stats-raw} and
- * {@code stats/bridge-consensus-stats-raw}. Writes final results to
- * {@code stats/consensus-stats} for all days for which at least half
- * of the expected consensuses or statuses are known.
- */
-public class ConsensusStatsFileHandler {
-
- /**
- * Intermediate results file holding the number of running bridges per
- * bridge status.
- */
- private File bridgeConsensusStatsRawFile;
-
- /**
- * Number of running bridges in a given bridge status. Map keys are the bridge
- * status time formatted as "yyyy-MM-dd HH:mm:ss", a comma, and the bridge
- * authority nickname, map values are lines as read from
- * {@code stats/bridge-consensus-stats-raw}.
- */
- private SortedMap<String, String> bridgesRaw;
-
- /**
- * Average number of running bridges per day. Map keys are dates
- * formatted as "yyyy-MM-dd", map values are the remaining columns as written
- * to {@code stats/consensus-stats}.
- */
- private SortedMap<String, String> bridgesPerDay;
-
- private static Logger log = LoggerFactory.getLogger(
- ConsensusStatsFileHandler.class);
-
- private int bridgeResultsAdded = 0;
-
- /* Database connection string. */
- private String connectionUrl;
-
- private SimpleDateFormat dateTimeFormat;
-
- private File bridgesDir;
-
- private File statsDirectory;
-
- private boolean keepImportHistory;
-
- /**
- * Initializes this class, including reading in intermediate results
- * files {@code stats/consensus-stats-raw} and
- * {@code stats/bridge-consensus-stats-raw} and final results file
- * {@code stats/consensus-stats}.
- */
- public ConsensusStatsFileHandler(String connectionUrl,
- File bridgesDir, File statsDirectory,
- boolean keepImportHistory) {
-
- if (bridgesDir == null || statsDirectory == null) {
- throw new IllegalArgumentException();
- }
- this.bridgesDir = bridgesDir;
- this.statsDirectory = statsDirectory;
- this.keepImportHistory = keepImportHistory;
-
- /* Initialize local data structures to hold intermediate and final
- * results. */
- this.bridgesPerDay = new TreeMap<>();
- this.bridgesRaw = new TreeMap<>();
-
- /* Initialize file names for intermediate and final results files. */
- this.bridgeConsensusStatsRawFile = new File(
- "stats/bridge-consensus-stats-raw");
-
- /* Initialize database connection string. */
- this.connectionUrl = connectionUrl;
-
- this.dateTimeFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
- this.dateTimeFormat.setTimeZone(TimeZone.getTimeZone("UTC"));
-
- /* Read in number of running bridges per bridge status. */
- if (this.bridgeConsensusStatsRawFile.exists()) {
- log.debug("Reading file {}...",
- this.bridgeConsensusStatsRawFile.getAbsolutePath());
- try (BufferedReader br = new BufferedReader(new FileReader(
- this.bridgeConsensusStatsRawFile))) {
- String line;
- while ((line = br.readLine()) != null) {
- if (line.startsWith("date")) {
- /* Skip headers. */
- continue;
- }
- String[] parts = line.split(",");
- if (parts.length < 2 || parts.length > 4) {
- log.warn("Corrupt line '{}' in file {}! Aborting to read this "
- + "file!", line,
- this.bridgeConsensusStatsRawFile.getAbsolutePath());
- break;
- }
- /* Assume that all lines without authority nickname are based on
- * Tonga's network status, not Bifroest's. */
- String key = parts[0] + "," + (parts.length < 4 ? "Tonga" : parts[1]);
- String value = null;
- if (parts.length == 2) {
- value = key + "," + parts[1] + ",0";
- } else if (parts.length == 3) {
- value = key + "," + parts[1] + "," + parts[2];
- } else if (parts.length == 4) {
- value = key + "," + parts[2] + "," + parts[3];
- } /* No more cases as we already checked the range above. */
- this.bridgesRaw.put(key, value);
- }
- log.debug("Finished reading file {}.",
- this.bridgeConsensusStatsRawFile.getAbsolutePath());
- } catch (IOException e) {
- log.warn("Failed to read file {}!",
- this.bridgeConsensusStatsRawFile.getAbsolutePath(), e);
- }
- }
- }
-
- /**
- * Adds the intermediate results of the number of running bridges in a
- * given bridge status to the existing observations.
- */
- public void addBridgeConsensusResults(long publishedMillis,
- String authorityNickname, int running, int runningEc2Bridges) {
- String publishedAuthority = dateTimeFormat.format(publishedMillis) + ","
- + authorityNickname;
- String line = publishedAuthority + "," + running + "," + runningEc2Bridges;
- if (!this.bridgesRaw.containsKey(publishedAuthority)) {
- log.debug("Adding new bridge numbers: {}", line);
- this.bridgesRaw.put(publishedAuthority, line);
- this.bridgeResultsAdded++;
- } else if (!line.equals(this.bridgesRaw.get(publishedAuthority))) {
- log.warn("The numbers of running bridges we were just given ({}) are "
- + "different from what we learned before ({})! Overwriting!", line,
- this.bridgesRaw.get(publishedAuthority));
- this.bridgesRaw.put(publishedAuthority, line);
- }
- }
-
- /** Imports sanitized bridge descriptors. */
- public void importSanitizedBridges() {
- if (bridgesDir.exists()) {
- log.debug("Importing files in directory {}/...", bridgesDir);
- DescriptorReader reader =
- DescriptorSourceFactory.createDescriptorReader();
- File historyFile = new File(statsDirectory,
- "consensus-stats-bridge-descriptor-history");
- if (keepImportHistory) {
- reader.setHistoryFile(historyFile);
- }
- for (Descriptor descriptor : reader.readDescriptors(bridgesDir)) {
- if (descriptor instanceof BridgeNetworkStatus) {
- String descriptorFileName = descriptor.getDescriptorFile().getName();
- String authority = null;
- if (descriptorFileName.contains(
- "4A0CCD2DDC7995083D73F5D667100C8A5831F16D")) {
- authority = "Tonga";
- } else if (descriptorFileName.contains(
- "1D8F3A91C37C5D1C4C19B1AD1D0CFBE8BF72D8E1")) {
- authority = "Bifroest";
- } else if (descriptorFileName.contains(
- "BA44A889E64B93FAA2B114E02C2A279A8555C533")) {
- authority = "Serge";
- }
- if (authority == null) {
- log.warn("Did not recognize the bridge authority that generated "
- + "{}. Skipping.", descriptorFileName);
- continue;
- }
- this.addBridgeNetworkStatus(
- (BridgeNetworkStatus) descriptor, authority);
- }
- }
- if (keepImportHistory) {
- reader.saveHistoryFile(historyFile);
- }
- log.info("Finished importing bridge descriptors.");
- }
- }
-
- private void addBridgeNetworkStatus(BridgeNetworkStatus status,
- String authority) {
- int runningBridges = 0;
- int runningEc2Bridges = 0;
- for (NetworkStatusEntry statusEntry
- : status.getStatusEntries().values()) {
- if (statusEntry.getFlags().contains("Running")) {
- runningBridges++;
- if (statusEntry.getNickname().startsWith("ec2bridge")) {
- runningEc2Bridges++;
- }
- }
- }
- this.addBridgeConsensusResults(status.getPublishedMillis(), authority,
- runningBridges, runningEc2Bridges);
- }
-
- /**
- * Aggregates the raw observations on relay and bridge numbers and
- * writes both raw and aggregate observations to disk.
- */
- public void writeFiles() {
-
- /* Go through raw observations and put everything into nested maps by day
- * and bridge authority. */
- Map<String, Map<String, int[]>> bridgesPerDayAndAuthority = new HashMap<>();
- for (String bridgesRawLine : this.bridgesRaw.values()) {
- String[] parts = bridgesRawLine.split(",");
- int brunning = Integer.parseInt(parts[2]);
- if (brunning <= 0) {
- /* Skip this status which contains zero bridges with the Running
- * flag. */
- continue;
- }
- String date = bridgesRawLine.substring(0, 10);
- bridgesPerDayAndAuthority.putIfAbsent(date, new TreeMap<>());
- String authority = parts[1];
- bridgesPerDayAndAuthority.get(date).putIfAbsent(authority, new int[3]);
- int[] bridges = bridgesPerDayAndAuthority.get(date).get(authority);
- bridges[0] += brunning;
- bridges[1] += Integer.parseInt(parts[3]);
- bridges[2]++;
- }
-
- /* Sum up average numbers of running bridges per day reported by all bridge
- * authorities and add these averages to final results. */
- for (Map.Entry<String, Map<String, int[]>> perDay
- : bridgesPerDayAndAuthority.entrySet()) {
- String date = perDay.getKey();
- int brunning = 0;
- int brunningEc2 = 0;
- for (int[] perAuthority : perDay.getValue().values()) {
- int statuses = perAuthority[2];
- if (statuses < 12) {
- /* Only write results if we have seen at least a dozen statuses. */
- continue;
- }
- brunning += perAuthority[0] / statuses;
- brunningEc2 += perAuthority[1] / statuses;
- }
- String line = "," + brunning + "," + brunningEc2;
- /* Are our results new? */
- if (!this.bridgesPerDay.containsKey(date)) {
- log.debug("Adding new average bridge numbers: {}{}", date, line);
- this.bridgesPerDay.put(date, line);
- } else if (!line.equals(this.bridgesPerDay.get(date))) {
- log.debug("Replacing existing average bridge numbers ({} with new "
- + "numbers: {}", this.bridgesPerDay.get(date), line);
- this.bridgesPerDay.put(date, line);
- }
- }
-
- /* Write raw numbers of running bridges to disk. */
- log.debug("Writing file {}...",
- this.bridgeConsensusStatsRawFile.getAbsolutePath());
- this.bridgeConsensusStatsRawFile.getParentFile().mkdirs();
- try (BufferedWriter bw = new BufferedWriter(
- new FileWriter(this.bridgeConsensusStatsRawFile))) {
- bw.append("datetime,authority,brunning,brunningec2");
- bw.newLine();
- for (String line : this.bridgesRaw.values()) {
- bw.append(line);
- bw.newLine();
- }
- log.debug("Finished writing file {}.",
- this.bridgeConsensusStatsRawFile.getAbsolutePath());
- } catch (IOException e) {
- log.warn("Failed to write file {}!",
- this.bridgeConsensusStatsRawFile.getAbsolutePath(), e);
- }
-
- /* Add average number of bridges per day to the database. */
- if (connectionUrl != null) {
- try {
- Map<String, String> updateRows = new HashMap<>();
- Map<String, String> insertRows = new HashMap<>(this.bridgesPerDay);
- Connection conn = DriverManager.getConnection(connectionUrl);
- conn.setAutoCommit(false);
- Statement statement = conn.createStatement();
- ResultSet rs = statement.executeQuery(
- "SELECT date, avg_running, avg_running_ec2 "
- + "FROM bridge_network_size");
- while (rs.next()) {
- String date = rs.getDate(1).toString();
- if (insertRows.containsKey(date)) {
- String insertRow = insertRows.remove(date);
- String[] parts = insertRow.substring(1).split(",");
- long newAvgRunning = Long.parseLong(parts[0]);
- long newAvgRunningEc2 = Long.parseLong(parts[1]);
- long oldAvgRunning = rs.getLong(2);
- long oldAvgRunningEc2 = rs.getLong(3);
- if (newAvgRunning != oldAvgRunning
- || newAvgRunningEc2 != oldAvgRunningEc2) {
- updateRows.put(date, insertRow);
- }
- }
- }
- rs.close();
- PreparedStatement psU = conn.prepareStatement(
- "UPDATE bridge_network_size SET avg_running = ?, "
- + "avg_running_ec2 = ? WHERE date = ?");
- for (Map.Entry<String, String> e : updateRows.entrySet()) {
- java.sql.Date date = java.sql.Date.valueOf(e.getKey());
- String[] parts = e.getValue().substring(1).split(",");
- long avgRunning = Long.parseLong(parts[0]);
- long avgRunningEc2 = Long.parseLong(parts[1]);
- psU.clearParameters();
- psU.setLong(1, avgRunning);
- psU.setLong(2, avgRunningEc2);
- psU.setDate(3, date);
- psU.executeUpdate();
- }
- PreparedStatement psI = conn.prepareStatement(
- "INSERT INTO bridge_network_size (avg_running, "
- + "avg_running_ec2, date) VALUES (?, ?, ?)");
- for (Map.Entry<String, String> e : insertRows.entrySet()) {
- java.sql.Date date = java.sql.Date.valueOf(e.getKey());
- String[] parts = e.getValue().substring(1).split(",");
- long avgRunning = Long.parseLong(parts[0]);
- long avgRunningEc2 = Long.parseLong(parts[1]);
- psI.clearParameters();
- psI.setLong(1, avgRunning);
- psI.setLong(2, avgRunningEc2);
- psI.setDate(3, date);
- psI.executeUpdate();
- }
- conn.commit();
- conn.close();
- } catch (SQLException e) {
- log.warn("Failed to add average bridge numbers to database.", e);
- }
- }
-
- /* Write stats. */
- StringBuilder dumpStats = new StringBuilder("Finished writing "
- + "statistics on bridge network statuses to disk.\nAdded "
- + this.bridgeResultsAdded + " bridge network status(es) in this "
- + "execution.");
- long now = System.currentTimeMillis();
- SimpleDateFormat dateTimeFormat =
- new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
- dateTimeFormat.setTimeZone(TimeZone.getTimeZone("UTC"));
- if (this.bridgesRaw.isEmpty()) {
- dumpStats.append("\nNo bridge status known yet.");
- } else {
- dumpStats.append("\nLast known bridge status was published ")
- .append(this.bridgesRaw.lastKey()).append(".");
- try {
- if (now - 6L * 60L * 60L * 1000L > dateTimeFormat.parse(
- this.bridgesRaw.lastKey()).getTime()) {
- log.warn("Last known bridge status is more than 6 hours old: {}",
- this.bridgesRaw.lastKey());
- }
- } catch (ParseException e) {
- log.warn("Can't parse the timestamp? Reason: {}", e);
- }
- }
- log.info(dumpStats.toString());
- }
-}
-
diff --git a/src/main/java/org/torproject/metrics/stats/servers/Main.java b/src/main/java/org/torproject/metrics/stats/servers/Main.java
index 080b6e4..4d349bc 100644
--- a/src/main/java/org/torproject/metrics/stats/servers/Main.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/Main.java
@@ -54,23 +54,6 @@ public class Main {
}
}
- // Prepare consensus stats file handler (used for stats on running
- // bridges only)
- ConsensusStatsFileHandler csfh = config.getWriteBridgeStats()
- ? new ConsensusStatsFileHandler(
- config.getRelayDescriptorDatabaseJdbc(),
- new File(config.getSanitizedBridgesDirectory()),
- statsDirectory, config.getKeepSanitizedBridgesImportHistory())
- : null;
-
- // Import sanitized bridges and write updated stats files to disk
- if (csfh != null) {
- if (config.getImportSanitizedBridges()) {
- csfh.importSanitizedBridges();
- }
- csfh.writeFiles();
- }
-
// Remove lock file
lf.releaseLock();
diff --git a/src/main/resources/legacy.config.template b/src/main/resources/legacy.config.template
index afa8f2d..5475c1e 100644
--- a/src/main/resources/legacy.config.template
+++ b/src/main/resources/legacy.config.template
@@ -12,18 +12,6 @@
## again, but it can be confusing to users who don't know about it.
#KeepDirectoryArchiveImportHistory 0
#
-## Import sanitized bridges from disk, if available
-#ImportSanitizedBridges 0
-#
-## Relative path to directory to import sanitized bridges from
-#SanitizedBridgesDirectory /srv/metrics.torproject.org/metrics/shared/in/recent/bridge-descriptors/
-#
-## Keep a history of imported sanitized bridge descriptors. This history
-## can be useful when importing from a changing data source to avoid
-## importing descriptors more than once, but it can be confusing to users
-## who don't know about it.
-#KeepSanitizedBridgesImportHistory 0
-#
## Write relay descriptors to the database
#WriteRelayDescriptorDatabase 0
#
@@ -38,6 +26,3 @@
## files will be overwritten!
#RelayDescriptorRawFilesDirectory pg-import/
#
-## Write bridge stats to disk
-#WriteBridgeStats 0
-#
diff --git a/src/main/sql/legacy/tordir.sql b/src/main/sql/legacy/tordir.sql
index 16e7166..f1d6767 100644
--- a/src/main/sql/legacy/tordir.sql
+++ b/src/main/sql/legacy/tordir.sql
@@ -104,53 +104,6 @@ CREATE TABLE consensus (
CONSTRAINT consensus_pkey PRIMARY KEY (validafter)
);
--- TABLE network_size
-CREATE TABLE network_size (
- date DATE NOT NULL,
- avg_running INTEGER NOT NULL,
- avg_exit INTEGER NOT NULL,
- avg_guard INTEGER NOT NULL,
- avg_fast INTEGER NOT NULL,
- avg_stable INTEGER NOT NULL,
- avg_authority INTEGER NOT NULL,
- avg_badexit INTEGER NOT NULL,
- avg_baddirectory INTEGER NOT NULL,
- avg_hsdir INTEGER NOT NULL,
- avg_named INTEGER NOT NULL,
- avg_unnamed INTEGER NOT NULL,
- avg_valid INTEGER NOT NULL,
- avg_v2dir INTEGER NOT NULL,
- avg_v3dir INTEGER NOT NULL,
- CONSTRAINT network_size_pkey PRIMARY KEY(date)
-);
-
--- TABLE relay_countries
-CREATE TABLE relay_countries (
- date DATE NOT NULL,
- country CHARACTER(2) NOT NULL,
- relays INTEGER NOT NULL,
- CONSTRAINT relay_countries_pkey PRIMARY KEY(date, country)
-);
-
--- TABLE relay_platforms
-CREATE TABLE relay_platforms (
- date DATE NOT NULL,
- avg_linux INTEGER NOT NULL,
- avg_darwin INTEGER NOT NULL,
- avg_bsd INTEGER NOT NULL,
- avg_windows INTEGER NOT NULL,
- avg_other INTEGER NOT NULL,
- CONSTRAINT relay_platforms_pkey PRIMARY KEY(date)
-);
-
--- TABLE relay_versions
-CREATE TABLE relay_versions (
- date DATE NOT NULL,
- version CHARACTER(5) NOT NULL,
- relays INTEGER NOT NULL,
- CONSTRAINT relay_versions_pkey PRIMARY KEY(date, version)
-);
-
-- TABLE bandwidth_flags
CREATE TABLE bandwidth_flags (
date DATE NOT NULL,
@@ -299,157 +252,6 @@ $$ LANGUAGE plpgsql;
-- They find what new data has been entered or updated based on the
-- updates table.
--- FUNCTION refresh_network_size()
-CREATE OR REPLACE FUNCTION refresh_network_size() RETURNS INTEGER AS $$
- DECLARE
- min_date TIMESTAMP WITHOUT TIME ZONE;
- max_date TIMESTAMP WITHOUT TIME ZONE;
- BEGIN
-
- min_date := (SELECT MIN(date) FROM updates);
- max_date := (SELECT MAX(date) + 1 FROM updates);
-
- DELETE FROM network_size
- WHERE date IN (SELECT date FROM updates);
-
- EXECUTE '
- INSERT INTO network_size
- (date, avg_running, avg_exit, avg_guard, avg_fast, avg_stable,
- avg_authority, avg_badexit, avg_baddirectory, avg_hsdir,
- avg_named, avg_unnamed, avg_valid, avg_v2dir, avg_v3dir)
- SELECT date,
- isrunning / count AS avg_running,
- isexit / count AS avg_exit,
- isguard / count AS avg_guard,
- isfast / count AS avg_fast,
- isstable / count AS avg_stable,
- isauthority / count as avg_authority,
- isbadexit / count as avg_badexit,
- isbaddirectory / count as avg_baddirectory,
- ishsdir / count as avg_hsdir,
- isnamed / count as avg_named,
- isunnamed / count as avg_unnamed,
- isvalid / count as avg_valid,
- isv2dir / count as avg_v2dir,
- isv3dir / count as avg_v3dir
- FROM (
- SELECT DATE(validafter) AS date,
- COUNT(*) AS isrunning,
- COUNT(NULLIF(isexit, FALSE)) AS isexit,
- COUNT(NULLIF(isguard, FALSE)) AS isguard,
- COUNT(NULLIF(isfast, FALSE)) AS isfast,
- COUNT(NULLIF(isstable, FALSE)) AS isstable,
- COUNT(NULLIF(isauthority, FALSE)) AS isauthority,
- COUNT(NULLIF(isbadexit, FALSE)) AS isbadexit,
- COUNT(NULLIF(isbaddirectory, FALSE)) AS isbaddirectory,
- COUNT(NULLIF(ishsdir, FALSE)) AS ishsdir,
- COUNT(NULLIF(isnamed, FALSE)) AS isnamed,
- COUNT(NULLIF(isunnamed, FALSE)) AS isunnamed,
- COUNT(NULLIF(isvalid, FALSE)) AS isvalid,
- COUNT(NULLIF(isv2dir, FALSE)) AS isv2dir,
- COUNT(NULLIF(isv3dir, FALSE)) AS isv3dir
- FROM statusentry
- WHERE isrunning = TRUE
- AND validafter >= ''' || min_date || '''
- AND validafter < ''' || max_date || '''
- AND DATE(validafter) IN (SELECT date FROM updates)
- GROUP BY DATE(validafter)
- ) b
- NATURAL JOIN relay_statuses_per_day';
-
- RETURN 1;
- END;
-$$ LANGUAGE plpgsql;
-
--- FUNCTION refresh_relay_platforms()
-CREATE OR REPLACE FUNCTION refresh_relay_platforms() RETURNS INTEGER AS $$
- DECLARE
- min_date TIMESTAMP WITHOUT TIME ZONE;
- max_date TIMESTAMP WITHOUT TIME ZONE;
- BEGIN
-
- min_date := (SELECT MIN(date) FROM updates);
- max_date := (SELECT MAX(date) + 1 FROM updates);
-
- DELETE FROM relay_platforms
- WHERE date IN (SELECT date FROM updates);
-
- EXECUTE '
- INSERT INTO relay_platforms
- (date, avg_linux, avg_darwin, avg_bsd, avg_windows, avg_other)
- SELECT date,
- linux / count AS avg_linux,
- darwin / count AS avg_darwin,
- bsd / count AS avg_bsd,
- windows / count AS avg_windows,
- other / count AS avg_other
- FROM (
- SELECT DATE(validafter) AS date,
- SUM(CASE WHEN platform LIKE ''%Linux%'' THEN 1 ELSE 0 END)
- AS linux,
- SUM(CASE WHEN platform LIKE ''%Darwin%'' THEN 1 ELSE 0 END)
- AS darwin,
- SUM(CASE WHEN platform LIKE ''%BSD%'' THEN 1 ELSE 0 END)
- AS bsd,
- SUM(CASE WHEN platform LIKE ''%Windows%'' THEN 1 ELSE 0 END)
- AS windows,
- SUM(CASE WHEN platform NOT LIKE ''%Windows%''
- AND platform NOT LIKE ''%Darwin%''
- AND platform NOT LIKE ''%BSD%''
- AND platform NOT LIKE ''%Linux%'' THEN 1 ELSE 0 END)
- AS other
- FROM descriptor
- RIGHT JOIN statusentry
- ON statusentry.descriptor = descriptor.descriptor
- WHERE isrunning = TRUE
- AND validafter >= ''' || min_date || '''
- AND validafter < ''' || max_date || '''
- AND DATE(validafter) IN (SELECT date FROM updates)
- GROUP BY DATE(validafter)
- ) b
- NATURAL JOIN relay_statuses_per_day';
-
- RETURN 1;
- END;
-$$ LANGUAGE plpgsql;
-
--- FUNCTION refresh_relay_versions()
-CREATE OR REPLACE FUNCTION refresh_relay_versions() RETURNS INTEGER AS $$
- DECLARE
- min_date TIMESTAMP WITHOUT TIME ZONE;
- max_date TIMESTAMP WITHOUT TIME ZONE;
- BEGIN
-
- min_date := (SELECT MIN(date) FROM updates);
- max_date := (SELECT MAX(date) + 1 FROM updates);
-
- DELETE FROM relay_versions
- WHERE date IN (SELECT date FROM updates);
-
- EXECUTE '
- INSERT INTO relay_versions
- (date, version, relays)
- SELECT date, version, relays / count AS relays
- FROM (
- SELECT DATE(validafter),
- CASE WHEN platform LIKE ''Tor 0._._%'' THEN
- SUBSTRING(platform, 5, 5) ELSE ''Other'' END AS version,
- COUNT(*) AS relays
- FROM descriptor RIGHT JOIN statusentry
- ON descriptor.descriptor = statusentry.descriptor
- WHERE isrunning = TRUE
- AND platform IS NOT NULL
- AND validafter >= ''' || min_date || '''
- AND validafter < ''' || max_date || '''
- AND DATE(validafter) IN (SELECT date FROM updates)
- GROUP BY 1, 2
- ) b
- NATURAL JOIN relay_statuses_per_day';
-
- RETURN 1;
- END;
-$$ LANGUAGE plpgsql;
-
CREATE OR REPLACE FUNCTION refresh_bandwidth_flags() RETURNS INTEGER AS $$
DECLARE
min_date TIMESTAMP WITHOUT TIME ZONE;
@@ -581,20 +383,6 @@ CREATE OR REPLACE FUNCTION refresh_user_stats() RETURNS INTEGER AS $$
END;
$$ LANGUAGE plpgsql;
--- non-relay statistics
--- The following tables contain pre-aggregated statistics that are not
--- based on relay descriptors or that are not yet derived from the relay
--- descriptors in the database.
-
--- TABLE bridge_network_size
--- Contains average number of running bridges.
-CREATE TABLE bridge_network_size (
- "date" DATE NOT NULL,
- avg_running INTEGER NOT NULL,
- avg_running_ec2 INTEGER NOT NULL,
- CONSTRAINT bridge_network_size_pkey PRIMARY KEY(date)
-);
-
-- Refresh all statistics in the database.
CREATE OR REPLACE FUNCTION refresh_all() RETURNS INTEGER AS $$
BEGIN
@@ -605,12 +393,6 @@ CREATE OR REPLACE FUNCTION refresh_all() RETURNS INTEGER AS $$
INSERT INTO updates SELECT * FROM scheduled_updates;
RAISE NOTICE '% Refreshing relay statuses per day.', timeofday();
PERFORM refresh_relay_statuses_per_day();
- RAISE NOTICE '% Refreshing network size.', timeofday();
- PERFORM refresh_network_size();
- RAISE NOTICE '% Refreshing relay platforms.', timeofday();
- PERFORM refresh_relay_platforms();
- RAISE NOTICE '% Refreshing relay versions.', timeofday();
- PERFORM refresh_relay_versions();
RAISE NOTICE '% Refreshing total relay bandwidth.', timeofday();
PERFORM refresh_bandwidth_flags();
RAISE NOTICE '% Refreshing bandwidth history.', timeofday();
@@ -630,72 +412,6 @@ CREATE OR REPLACE FUNCTION refresh_all() RETURNS INTEGER AS $$
END;
$$ LANGUAGE plpgsql;
--- View for exporting server statistics.
-CREATE VIEW stats_servers AS
- (SELECT date, NULL AS flag, NULL AS country, NULL AS version,
- NULL AS platform, TRUE AS ec2bridge, NULL AS relays,
- avg_running_ec2 AS bridges FROM bridge_network_size
- WHERE date < current_date)
-UNION ALL
- (SELECT COALESCE(network_size.date, bridge_network_size.date) AS date,
- NULL AS flag, NULL AS country, NULL AS version, NULL AS platform,
- NULL AS ec2bridge, network_size.avg_running AS relays,
- bridge_network_size.avg_running AS bridges FROM network_size
- FULL OUTER JOIN bridge_network_size
- ON network_size.date = bridge_network_size.date
- WHERE COALESCE(network_size.date, bridge_network_size.date) <
- current_date)
-UNION ALL
- (SELECT date, 'Exit' AS flag, NULL AS country, NULL AS version,
- NULL AS platform, NULL AS ec2bridge, avg_exit AS relays,
- NULL AS bridges FROM network_size WHERE date < current_date)
-UNION ALL
- (SELECT date, 'Guard' AS flag, NULL AS country, NULL AS version,
- NULL AS platform, NULL AS ec2bridge, avg_guard AS relays,
- NULL AS bridges FROM network_size WHERE date < current_date)
-UNION ALL
- (SELECT date, 'Fast' AS flag, NULL AS country, NULL AS version,
- NULL AS platform, NULL AS ec2bridge, avg_fast AS relays,
- NULL AS bridges FROM network_size WHERE date < current_date)
-UNION ALL
- (SELECT date, 'Stable' AS flag, NULL AS country, NULL AS version,
- NULL AS platform, NULL AS ec2bridge, avg_stable AS relays,
- NULL AS bridges FROM network_size WHERE date < current_date)
-UNION ALL
- (SELECT date, 'HSDir' AS flag, NULL AS country, NULL AS version,
- NULL AS platform, NULL AS ec2bridge, avg_hsdir AS relays,
- NULL AS bridges FROM network_size WHERE date < current_date)
-UNION ALL
- (SELECT date, NULL AS flag, CASE WHEN country != 'zz' THEN country
- ELSE '??' END AS country, NULL AS version, NULL AS platform,
- NULL AS ec2bridge, relays, NULL AS bridges FROM relay_countries
- WHERE date < current_date)
-UNION ALL
- (SELECT date, NULL AS flag, NULL AS country, version, NULL AS platform,
- NULL AS ec2bridge, relays, NULL AS bridges FROM relay_versions
- WHERE date < current_date)
-UNION ALL
- (SELECT date, NULL AS flag, NULL AS country, NULL AS version,
- 'Linux' AS platform, NULL AS ec2bridge, avg_linux AS relays,
- NULL AS bridges FROM relay_platforms WHERE date < current_date)
-UNION ALL
- (SELECT date, NULL AS flag, NULL AS country, NULL AS version,
- 'Darwin' AS platform, NULL AS ec2bridge, avg_darwin AS relays,
- NULL AS bridges FROM relay_platforms WHERE date < current_date)
-UNION ALL
- (SELECT date, NULL AS flag, NULL AS country, NULL AS version,
- 'BSD' AS platform, NULL AS ec2bridge, avg_bsd AS relays,
- NULL AS bridges FROM relay_platforms WHERE date < current_date)
-UNION ALL
- (SELECT date, NULL AS flag, NULL AS country, NULL AS version,
- 'Windows' AS platform, NULL AS ec2bridge, avg_windows AS relays,
- NULL AS bridges FROM relay_platforms WHERE date < current_date)
-UNION ALL
- (SELECT date, NULL AS flag, NULL AS country, NULL AS version,
- 'Other' AS platform, NULL AS ec2bridge, avg_other AS relays,
- NULL AS bridges FROM relay_platforms WHERE date < current_date)
-ORDER BY date, flag, country, version, platform, ec2bridge;
-
-- View for exporting bandwidth statistics.
CREATE VIEW stats_bandwidth AS
(SELECT COALESCE(bandwidth_flags.date, bwhist_flags.date) AS date,
commit 88535e1f37ef47c2db61b2e6e0f0b0ab633d0a99
Author: Karsten Loesing <karsten.loesing(a)gmx.net>
Date: Wed Nov 14 17:05:18 2018 +0100
Fix ipv6servers unit tests.
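The updated assertions (in the diff below) derive guard and exit status from the consensus flags rather than from dedicated booleans: a relay counts as a guard when it carries the Guard flag, and as an exit only when it carries the Exit flag without BadExit. A minimal sketch of that predicate; the class and method names here are illustrative, not part of the patch:

```java
import java.util.Set;

/** Illustrative helper mirroring the flag logic used in the updated test. */
class FlagPredicates {

  /** A relay is counted as a guard if it has the Guard flag. */
  static boolean isGuard(Set<String> flags) {
    return flags.contains("Guard");
  }

  /** A relay is counted as an exit only if it has the Exit flag
   * and is not also flagged as a bad exit. */
  static boolean isExit(Set<String> flags) {
    return flags.contains("Exit") && !flags.contains("BadExit");
  }
}
```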
---
.../metrics/stats/ipv6servers/Ipv6NetworkStatusTest.java | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/src/test/java/org/torproject/metrics/stats/ipv6servers/Ipv6NetworkStatusTest.java b/src/test/java/org/torproject/metrics/stats/ipv6servers/Ipv6NetworkStatusTest.java
index aaadcbf..2f3ca42 100644
--- a/src/test/java/org/torproject/metrics/stats/ipv6servers/Ipv6NetworkStatusTest.java
+++ b/src/test/java/org/torproject/metrics/stats/ipv6servers/Ipv6NetworkStatusTest.java
@@ -122,8 +122,11 @@ public class Ipv6NetworkStatusTest {
for (Ipv6NetworkStatus.Entry parsedEntry
: parsedNetworkStatus.entries) {
if (this.digest.equals(parsedEntry.digest)) {
- assertEquals(this.description, this.guard, parsedEntry.guard);
- assertEquals(this.description, this.exit, parsedEntry.exit);
+ assertEquals(this.description, this.guard,
+ parsedEntry.flags.contains("Guard"));
+ assertEquals(this.description, this.exit,
+ parsedEntry.flags.contains("Exit")
+ && !parsedEntry.flags.contains("BadExit"));
assertEquals(this.description, this.reachable,
parsedEntry.reachable);
foundEntry = true;

[metrics-web/master] Modernize legacy module and rename it to bwhist.
by karsten@torproject.org 22 Nov '18
commit f8fa108d183968540eca529250cb142f8216ce8c
Author: Karsten Loesing <karsten.loesing(a)gmx.net>
Date: Wed Nov 14 10:39:24 2018 +0100
Modernize legacy module and rename it to bwhist.
Changes include using similar mechanisms for configuration, calling
the database aggregation function, querying the database, and writing
results as we're using in the ipv6servers and other modules.
Configuration options can now be changed via the following Java
properties:
bwhist.descriptors
bwhist.database
bwhist.history
bwhist.output
The legacy.config file, if one exists, will be ignored.
Part of #28116.
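The four options listed above are plain Java system properties with hard-coded defaults, so they can be overridden on the command line (e.g. `-Dbwhist.output=/srv/out`) without any config file. A minimal sketch of that mechanism, using the default values from the new Configuration class in the diff below:

```java
/** Illustrative sketch of the bwhist configuration mechanism: each
 * option is a Java system property falling back to a hard-coded
 * default when the property is not set. */
class BwhistConfigSketch {
  static String descriptors = System.getProperty("bwhist.descriptors",
      "../../shared/in/");
  static String database = System.getProperty("bwhist.database",
      "jdbc:postgresql:tordir");
  static String history = System.getProperty("bwhist.history",
      "status/read-descriptors");
  static String output = System.getProperty("bwhist.output",
      "stats/");
}
```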
---
build.xml | 38 +-----
.../metrics/stats/bwhist/Configuration.java | 18 +++
.../org/torproject/metrics/stats/bwhist/Main.java | 56 +++++++++
.../RelayDescriptorDatabaseImporter.java | 131 +++++++++++++++------
.../torproject/metrics/stats/bwhist/Writer.java | 42 +++++++
.../metrics/stats/servers/Configuration.java | 87 --------------
.../org/torproject/metrics/stats/servers/Main.java | 40 -------
src/main/resources/legacy.config.template | 8 --
8 files changed, 212 insertions(+), 208 deletions(-)
diff --git a/build.xml b/build.xml
index b95550d..a391416 100644
--- a/build.xml
+++ b/build.xml
@@ -315,7 +315,7 @@
<antcall target="collectdescs" />
<antcall target="connbidirect" />
<antcall target="onionperf" />
- <antcall target="legacy" />
+ <antcall target="bwhist" />
<antcall target="advbwdist" />
<antcall target="hidserv" />
<antcall target="clients" />
@@ -340,39 +340,9 @@
<antcall target="run-java" />
</target>
- <!-- Provides legacy.config file from template. -->
- <target name="legacy-create-config" >
- <copy file="${resources}/legacy.config.template"
- tofile="${basedir}/legacy.config"/>
- </target>
-
- <!-- Expects legacy.config file in the base directory. -->
- <target name="legacy" >
- <property name="module.name" value="servers" />
- <property name="localmoddir" value="${modulebase}/${module.name}" />
- <property name="statsdir"
- value="${localmoddir}/stats" />
- <mkdir dir="${statsdir}" />
-
- <copy file="${basedir}/legacy.config"
- tofile="${localmoddir}/config"/>
-
+ <target name="bwhist" >
+ <property name="module.name" value="bwhist" />
<antcall target="run-java" />
-
- <exec executable="psql"
- dir="${localmoddir}"
- failonerror="true" >
- <arg value="--dbname=tordir"/>
- <arg value="-c SELECT * FROM refresh_all();" />
- </exec>
-
- <exec executable="psql"
- dir="${localmoddir}"
- failonerror="true" >
- <arg value="-c COPY (SELECT * FROM stats_bandwidth) TO STDOUT WITH CSV HEADER;" />
- <arg value="--dbname=tordir"/>
- <arg value="--output=${statsdir}/bandwidth.csv" />
- </exec>
</target>
<target name="advbwdist">
@@ -503,7 +473,7 @@
<fileset dir="${modulebase}/onionperf/stats" includes="*.csv" />
<fileset dir="${modulebase}/connbidirect/stats" includes="connbidirect2.csv" />
<fileset dir="${modulebase}/advbwdist/stats" includes="advbwdist.csv" />
- <fileset dir="${modulebase}/servers/stats" includes="*.csv" />
+ <fileset dir="${modulebase}/bwhist/stats" includes="*.csv" />
<fileset dir="${modulebase}/hidserv/stats" includes="hidserv.csv" />
<fileset dir="${modulebase}/clients/stats"
includes="clients*.csv userstats-combined.csv" />
diff --git a/src/main/java/org/torproject/metrics/stats/bwhist/Configuration.java b/src/main/java/org/torproject/metrics/stats/bwhist/Configuration.java
new file mode 100644
index 0000000..2a0fbc5
--- /dev/null
+++ b/src/main/java/org/torproject/metrics/stats/bwhist/Configuration.java
@@ -0,0 +1,18 @@
+/* Copyright 2011--2018 The Tor Project
+ * See LICENSE for licensing information */
+
+package org.torproject.metrics.stats.bwhist;
+
+/** Configuration options parsed from Java properties with reasonable hard-coded
+ * defaults. */
+public class Configuration {
+ static String descriptors = System.getProperty("bwhist.descriptors",
+ "../../shared/in/");
+ static String database = System.getProperty("bwhist.database",
+ "jdbc:postgresql:tordir");
+ static String history = System.getProperty("bwhist.history",
+ "status/read-descriptors");
+ static String output = System.getProperty("bwhist.output",
+ "stats/");
+}
+
diff --git a/src/main/java/org/torproject/metrics/stats/bwhist/Main.java b/src/main/java/org/torproject/metrics/stats/bwhist/Main.java
new file mode 100644
index 0000000..61c1435
--- /dev/null
+++ b/src/main/java/org/torproject/metrics/stats/bwhist/Main.java
@@ -0,0 +1,56 @@
+/* Copyright 2011--2018 The Tor Project
+ * See LICENSE for licensing information */
+
+package org.torproject.metrics.stats.bwhist;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.nio.file.Paths;
+import java.util.Arrays;
+
+/**
+ * Coordinate downloading and parsing of descriptors and extraction of
+ * statistically relevant data for later processing with R.
+ */
+public class Main {
+
+ private static Logger log = LoggerFactory.getLogger(Main.class);
+
+ private static String[][] paths = {
+ {"recent", "relay-descriptors", "consensuses"},
+ {"recent", "relay-descriptors", "extra-infos"},
+ {"archive", "relay-descriptors", "consensuses"},
+ {"archive", "relay-descriptors", "extra-infos"}};
+
+ /** Executes this data-processing module. */
+ public static void main(String[] args) throws Exception {
+
+ log.info("Starting bwhist module.");
+
+ log.info("Reading descriptors and inserting relevant parts into the "
+ + "database.");
+ File[] descriptorDirectories = Arrays.stream(paths).map((String[] path)
+ -> Paths.get(Configuration.descriptors, path).toFile())
+ .toArray(File[]::new);
+ File historyFile = new File(Configuration.history);
+ RelayDescriptorDatabaseImporter database
+ = new RelayDescriptorDatabaseImporter(descriptorDirectories,
+ historyFile, Configuration.database);
+ database.importRelayDescriptors();
+
+ log.info("Aggregating database entries.");
+ database.aggregate();
+
+ log.info("Querying aggregated statistics from the database.");
+ new Writer().write(Paths.get(Configuration.output, "bandwidth.csv"),
+ database.queryBandwidth());
+
+ log.info("Closing database connection.");
+ database.closeConnection();
+
+ log.info("Terminating bwhist module.");
+ }
+}
+
diff --git a/src/main/java/org/torproject/metrics/stats/servers/RelayDescriptorDatabaseImporter.java b/src/main/java/org/torproject/metrics/stats/bwhist/RelayDescriptorDatabaseImporter.java
similarity index 84%
rename from src/main/java/org/torproject/metrics/stats/servers/RelayDescriptorDatabaseImporter.java
rename to src/main/java/org/torproject/metrics/stats/bwhist/RelayDescriptorDatabaseImporter.java
index d1ae43c..a6cf0cc 100644
--- a/src/main/java/org/torproject/metrics/stats/servers/RelayDescriptorDatabaseImporter.java
+++ b/src/main/java/org/torproject/metrics/stats/bwhist/RelayDescriptorDatabaseImporter.java
@@ -1,7 +1,7 @@
/* Copyright 2011--2018 The Tor Project
* See LICENSE for licensing information */
-package org.torproject.metrics.stats.servers;
+package org.torproject.metrics.stats.bwhist;
import org.torproject.descriptor.Descriptor;
import org.torproject.descriptor.DescriptorReader;
@@ -20,6 +20,7 @@ import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
+import java.sql.Statement;
import java.sql.Timestamp;
import java.text.ParseException;
import java.text.SimpleDateFormat;
@@ -27,6 +28,7 @@ import java.util.ArrayList;
import java.util.Calendar;
import java.util.HashSet;
import java.util.List;
+import java.util.Locale;
import java.util.Map;
import java.util.Set;
import java.util.SortedSet;
@@ -108,22 +110,19 @@ public final class RelayDescriptorDatabaseImporter {
private boolean importIntoDatabase = true;
- private List<File> archivesDirectories;
+ private File[] descriptorDirectories;
- private File statsDirectory;
+ private File historyFile;
/**
* Initialize database importer by connecting to the database and
* preparing statements.
*/
- public RelayDescriptorDatabaseImporter(String connectionUrl,
- List<File> archivesDirectories, File statsDirectory) {
+ public RelayDescriptorDatabaseImporter(File[] descriptorDirectories,
+ File historyFile, String connectionUrl) {
- if (archivesDirectories == null || statsDirectory == null) {
- throw new IllegalArgumentException();
- }
- this.archivesDirectories = archivesDirectories;
- this.statsDirectory = statsDirectory;
+ this.descriptorDirectories = descriptorDirectories;
+ this.historyFile = historyFile;
if (connectionUrl != null) {
try {
@@ -520,29 +519,20 @@ public final class RelayDescriptorDatabaseImporter {
/** Imports relay descriptors into the database. */
public void importRelayDescriptors() {
- log.info("Importing files in directories " + archivesDirectories
- + "/...");
- if (!this.archivesDirectories.isEmpty()) {
- DescriptorReader reader =
- DescriptorSourceFactory.createDescriptorReader();
- reader.setMaxDescriptorsInQueue(10);
- File historyFile = new File(statsDirectory,
- "database-importer-relay-descriptor-history");
- reader.setHistoryFile(historyFile);
- for (Descriptor descriptor : reader.readDescriptors(
- this.archivesDirectories.toArray(
- new File[this.archivesDirectories.size()]))) {
- if (descriptor instanceof RelayNetworkStatusConsensus) {
- this.addRelayNetworkStatusConsensus(
- (RelayNetworkStatusConsensus) descriptor);
- } else if (descriptor instanceof ExtraInfoDescriptor) {
- this.addExtraInfoDescriptor((ExtraInfoDescriptor) descriptor);
- }
+ DescriptorReader reader =
+ DescriptorSourceFactory.createDescriptorReader();
+ reader.setMaxDescriptorsInQueue(10);
+ reader.setHistoryFile(this.historyFile);
+ for (Descriptor descriptor : reader.readDescriptors(
+ this.descriptorDirectories)) {
+ if (descriptor instanceof RelayNetworkStatusConsensus) {
+ this.addRelayNetworkStatusConsensus(
+ (RelayNetworkStatusConsensus) descriptor);
+ } else if (descriptor instanceof ExtraInfoDescriptor) {
+ this.addExtraInfoDescriptor((ExtraInfoDescriptor) descriptor);
}
- reader.saveHistoryFile(historyFile);
}
-
- log.info("Finished importing relay descriptors.");
+ reader.saveHistoryFile(this.historyFile);
}
private void addRelayNetworkStatusConsensus(
@@ -583,9 +573,9 @@ public final class RelayDescriptorDatabaseImporter {
}
/**
- * Close the relay descriptor database connection.
+ * Commit any non-committed parts.
*/
- public void closeConnection() {
+ public void commit() {
/* Log stats about imported descriptors. */
log.info("Finished importing relay descriptors: {} network status entries "
@@ -609,21 +599,84 @@ public final class RelayDescriptorDatabaseImporter {
}
}
- /* Commit any stragglers before closing. */
+ /* Commit any stragglers. */
if (this.conn != null) {
try {
this.csH.executeBatch();
this.conn.commit();
- } catch (SQLException e) {
+ } catch (SQLException e) {
log.warn("Could not commit final records to database", e);
}
- try {
- this.conn.close();
- } catch (SQLException e) {
- log.warn("Could not close database connection.", e);
+ }
+ }
+
+ /** Call the refresh_all() function to aggregate newly imported data. */
+ void aggregate() throws SQLException {
+ Statement st = this.conn.createStatement();
+ st.executeQuery("SELECT refresh_all()");
+ }
+
+ /** Query the servers_platforms view. */
+ List<String[]> queryBandwidth() throws SQLException {
+ List<String[]> statistics = new ArrayList<>();
+ String columns = "date, isexit, isguard, bwread, bwwrite, dirread, "
+ + "dirwrite";
+ statistics.add(columns.split(", "));
+ Statement st = this.conn.createStatement();
+ Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone("UTC"),
+ Locale.US);
+ String queryString = "SELECT " + columns + " FROM stats_bandwidth";
+ try (ResultSet rs = st.executeQuery(queryString)) {
+ while (rs.next()) {
+ String[] outputLine = new String[7];
+ outputLine[0] = rs.getDate("date", calendar).toLocalDate().toString();
+ outputLine[1] = getBooleanFromResultSet(rs, "isexit");
+ outputLine[2] = getBooleanFromResultSet(rs, "isguard");
+ outputLine[3] = getLongFromResultSet(rs, "bwread");
+ outputLine[4] = getLongFromResultSet(rs, "bwwrite");
+ outputLine[5] = getLongFromResultSet(rs, "dirread");
+ outputLine[6] = getLongFromResultSet(rs, "dirwrite");
+ statistics.add(outputLine);
}
}
+ return statistics;
+ }
+
+ /** Retrieve the <code>boolean</code> value of the designated column in the
+ * current row of the given <code>ResultSet</code> object and format it as a
+ * <code>String</code> object with <code>"t"</code> for <code>true</code> and
+ * <code>"f"</code> for <code>false</code>, or return <code>null</code> if the
+ * retrieved value was <code>NULL</code>. */
+ private static String getBooleanFromResultSet(ResultSet rs,
+ String columnLabel) throws SQLException {
+ boolean result = rs.getBoolean(columnLabel);
+ if (rs.wasNull()) {
+ return null;
+ } else {
+ return result ? "t" : "f";
+ }
+ }
+
+ /** Retrieve the <code>long</code> value of the designated column in the
+ * current row of the given <code>ResultSet</code> object and format it as a
+ * <code>String</code> object, or return <code>null</code> if the retrieved
+ * value was <code>NULL</code>. */
+ private static String getLongFromResultSet(ResultSet rs, String columnLabel)
+ throws SQLException {
+ long result = rs.getLong(columnLabel);
+ return rs.wasNull() ? null : String.valueOf(result);
+ }
+
+ /**
+ * Close the relay descriptor database connection.
+ */
+ public void closeConnection() {
+ try {
+ this.conn.close();
+ } catch (SQLException e) {
+ log.warn("Could not close database connection.", e);
+ }
}
}
diff --git a/src/main/java/org/torproject/metrics/stats/bwhist/Writer.java b/src/main/java/org/torproject/metrics/stats/bwhist/Writer.java
new file mode 100644
index 0000000..1ac1fd9
--- /dev/null
+++ b/src/main/java/org/torproject/metrics/stats/bwhist/Writer.java
@@ -0,0 +1,42 @@
+/* Copyright 2018 The Tor Project
+ * See LICENSE for licensing information */
+
+package org.torproject.metrics.stats.bwhist;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.List;
+
+/** Writer that takes output line objects and writes them to a file, preceded
+ * by a column header line. */
+class Writer {
+
+ /** Write output lines to the given file. */
+ void write(Path filePath, Iterable<String[]> outputLines)
+ throws IOException {
+ File parentFile = filePath.toFile().getParentFile();
+ if (null != parentFile && !parentFile.exists()) {
+ if (!parentFile.mkdirs()) {
+ throw new IOException("Unable to create parent directory of output "
+ + "file. Not writing this file.");
+ }
+ }
+ List<String> formattedOutputLines = new ArrayList<>();
+ for (String[] outputLine : outputLines) {
+ StringBuilder formattedOutputLine = new StringBuilder();
+ for (String outputLinePart : outputLine) {
+ formattedOutputLine.append(',');
+ if (null != outputLinePart) {
+ formattedOutputLine.append(outputLinePart);
+ }
+ }
+ formattedOutputLines.add(formattedOutputLine.substring(1));
+ }
+ Files.write(filePath, formattedOutputLines, StandardCharsets.UTF_8);
+ }
+}
+
diff --git a/src/main/java/org/torproject/metrics/stats/servers/Configuration.java b/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
deleted file mode 100644
index b6ee397..0000000
--- a/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
+++ /dev/null
@@ -1,87 +0,0 @@
-/* Copyright 2011--2018 The Tor Project
- * See LICENSE for licensing information */
-
-package org.torproject.metrics.stats.servers;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.BufferedReader;
-import java.io.File;
-import java.io.FileReader;
-import java.io.IOException;
-import java.net.MalformedURLException;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-
-/**
- * Initialize configuration with hard-coded defaults, overwrite with
- * configuration in config file, if exists, and answer Main.java about our
- * configuration.
- */
-public class Configuration {
-
- private static Logger log = LoggerFactory.getLogger(Configuration.class);
-
- private List<File> directoryArchivesDirectories = new ArrayList<>();
-
- private String relayDescriptorDatabaseJdbc =
- "jdbc:postgresql://localhost/tordir?user=metrics&password=password";
-
- /** Initializes this configuration class. */
- public Configuration() {
-
- /* Read config file, if present. */
- File configFile = new File("config");
- if (!configFile.exists()) {
- log.warn("Could not find config file.");
- return;
- }
- String line = null;
- try (BufferedReader br = new BufferedReader(new FileReader(configFile))) {
- while ((line = br.readLine()) != null) {
- if (line.startsWith("DirectoryArchivesDirectory")) {
- this.directoryArchivesDirectories.add(new File(line.split(" ")[1]));
- } else if (line.startsWith("RelayDescriptorDatabaseJDBC")) {
- this.relayDescriptorDatabaseJdbc = line.split(" ")[1];
- } else if (!line.startsWith("#") && line.length() > 0) {
- log.error("Configuration file contains unrecognized "
- + "configuration key in line '{}'! Exiting!", line);
- System.exit(1);
- }
- }
- } catch (ArrayIndexOutOfBoundsException e) {
- log.warn("Configuration file contains configuration key without value in "
- + "line '{}'. Exiting!", line);
- System.exit(1);
- } catch (MalformedURLException e) {
- log.warn("Configuration file contains illegal URL or IP:port pair in "
- + "line '{}'. Exiting!", line);
- System.exit(1);
- } catch (NumberFormatException e) {
- log.warn("Configuration file contains illegal value in line '{}' with "
- + "legal values being 0 or 1. Exiting!", line);
- System.exit(1);
- } catch (IOException e) {
- log.error("Unknown problem while reading config file! Exiting!", e);
- System.exit(1);
- }
- }
-
- /** Returns directories containing archived descriptors. */
- public List<File> getDirectoryArchivesDirectories() {
- if (this.directoryArchivesDirectories.isEmpty()) {
- String prefix = "../../shared/in/recent/relay-descriptors/";
- return Arrays.asList(new File(prefix + "consensuses/"),
- new File(prefix + "extra-infos/"));
- } else {
- return this.directoryArchivesDirectories;
- }
- }
-
- public String getRelayDescriptorDatabaseJdbc() {
- return this.relayDescriptorDatabaseJdbc;
- }
-}
-
diff --git a/src/main/java/org/torproject/metrics/stats/servers/Main.java b/src/main/java/org/torproject/metrics/stats/servers/Main.java
deleted file mode 100644
index 1454418..0000000
--- a/src/main/java/org/torproject/metrics/stats/servers/Main.java
+++ /dev/null
@@ -1,40 +0,0 @@
-/* Copyright 2011--2018 The Tor Project
- * See LICENSE for licensing information */
-
-package org.torproject.metrics.stats.servers;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.File;
-
-/**
- * Coordinate downloading and parsing of descriptors and extraction of
- * statistically relevant data for later processing with R.
- */
-public class Main {
-
- private static Logger log = LoggerFactory.getLogger(Main.class);
-
- /** Executes this data-processing module. */
- public static void main(String[] args) {
-
- log.info("Starting ERNIE.");
-
- // Initialize configuration
- Configuration config = new Configuration();
-
- // Define stats directory for temporary files
- File statsDirectory = new File("stats");
-
- // Import relay descriptors
- RelayDescriptorDatabaseImporter rddi = new RelayDescriptorDatabaseImporter(
- config.getRelayDescriptorDatabaseJdbc(),
- config.getDirectoryArchivesDirectories(), statsDirectory);
- rddi.importRelayDescriptors();
- rddi.closeConnection();
-
- log.info("Terminating ERNIE.");
- }
-}
-
diff --git a/src/main/resources/legacy.config.template b/src/main/resources/legacy.config.template
deleted file mode 100644
index e2e0dac..0000000
--- a/src/main/resources/legacy.config.template
+++ /dev/null
@@ -1,8 +0,0 @@
-## Relative paths to directories to import directory archives from
-#DirectoryArchivesDirectory /srv/metrics.torproject.org/metrics/shared/in/recent/relay-descriptors/consensuses/
-#DirectoryArchivesDirectory /srv/metrics.torproject.org/metrics/shared/in/recent/relay-descriptors/server-descriptors/
-#DirectoryArchivesDirectory /srv/metrics.torproject.org/metrics/shared/in/recent/relay-descriptors/extra-infos/
-#
-## JDBC string for relay descriptor database
-#RelayDescriptorDatabaseJDBC jdbc:postgresql://localhost/tordir?user=metrics&password=password
-#

22 Nov '18
commit 09cfdfdff4efc1aa1cc60f53f7f1353a6193e6ad
Author: Karsten Loesing <karsten.loesing(a)gmx.net>
Date: Mon Nov 12 19:50:46 2018 +0100
Remove advbw column from bandwidth.csv.
Instead use advbw data from ipv6servers module.
As a result, we can stop aggregating advertised bandwidths in the
legacy module.
Required schema changes to live tordir databases:
DROP VIEW stats_bandwidth;
CREATE VIEW stats_bandwidth [...]
CREATE OR REPLACE FUNCTION refresh_all() [...]
DROP FUNCTION refresh_bandwidth_flags();
DROP FUNCTION refresh_relay_statuses_per_day();
DROP TABLE relay_statuses_per_day;
DROP TABLE bandwidth_flags;
DROP TABLE consensus;
DROP FUNCTION delete_old_descriptor();
DROP TABLE descriptor;
Part of #28116.
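The reworked graphs.R code in the diff below converts byte totals to Gbit/s: advertised bandwidth as `advbw * 8 / 1e9`, and consumed bandwidth history as the mean of read and written bytes, `(bwread + bwwrite) * 8 / 2e9`. The same unit conversions in a small Java sketch (method names are illustrative, not from the patch):

```java
/** Illustrative bandwidth unit conversions used when plotting:
 * bytes per second are turned into Gbit/s. */
class BandwidthUnits {

  /** Advertised bandwidth in bytes/s converted to Gbit/s. */
  static double advbwGbits(double advbwBytes) {
    return advbwBytes * 8 / 1e9;
  }

  /** Mean of read and written history in bytes/s converted to
   * Gbit/s: sum the two directions, multiply by 8 bits per byte,
   * and divide by 2e9 to average and scale in one step. */
  static double bwhistGbits(double bwread, double bwwrite) {
    return (bwread + bwwrite) * 8 / 2e9;
  }
}
```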
---
src/main/R/rserver/graphs.R | 58 +++---
.../metrics/stats/ipv6servers/Database.java | 22 ++
.../torproject/metrics/stats/ipv6servers/Main.java | 2 +
.../metrics/stats/servers/Configuration.java | 1 -
.../servers/RelayDescriptorDatabaseImporter.java | 232 +--------------------
src/main/sql/ipv6servers/init-ipv6servers.sql | 11 +
src/main/sql/legacy/tordir.sql | 135 +-----------
7 files changed, 73 insertions(+), 388 deletions(-)
diff --git a/src/main/R/rserver/graphs.R b/src/main/R/rserver/graphs.R
index 9dc8c2d..df108e2 100644
--- a/src/main/R/rserver/graphs.R
+++ b/src/main/R/rserver/graphs.R
@@ -446,16 +446,19 @@ write_platforms <- function(start_p = NULL, end_p = NULL, path_p) {
}
prepare_bandwidth <- function(start_p, end_p) {
- read.csv(paste(stats_dir, "bandwidth.csv", sep = ""),
+ advbw <- read.csv(paste(stats_dir, "advbw.csv", sep = ""),
+ colClasses = c("date" = "Date")) %>%
+ transmute(date, variable = "advbw", value = advbw * 8 / 1e9)
+ bwhist <- read.csv(paste(stats_dir, "bandwidth.csv", sep = ""),
colClasses = c("date" = "Date")) %>%
+ transmute(date, variable = "bwhist", value = (bwread + bwwrite) * 8 / 2e9)
+ rbind(advbw, bwhist) %>%
filter(if (!is.null(start_p)) date >= as.Date(start_p) else TRUE) %>%
filter(if (!is.null(end_p)) date <= as.Date(end_p) else TRUE) %>%
- filter(isexit != "") %>%
- filter(isguard != "") %>%
- group_by(date) %>%
- summarize(advbw = sum(advbw) * 8 / 1e9,
- bwhist = sum(bwread + bwwrite) * 8 / 2e9) %>%
- select(date, advbw, bwhist)
+ filter(!is.na(value)) %>%
+ group_by(date, variable) %>%
+ summarize(value = sum(value)) %>%
+ spread(variable, value)
}
plot_bandwidth <- function(start_p, end_p, path_p) {
@@ -810,33 +813,24 @@ write_connbidirect <- function(start_p = NULL, end_p = NULL, path_p) {
}
prepare_bandwidth_flags <- function(start_p, end_p) {
- b <- read.csv(paste(stats_dir, "bandwidth.csv", sep = ""),
- colClasses = c("date" = "Date"))
- b <- b %>%
+ advbw <- read.csv(paste(stats_dir, "advbw.csv", sep = ""),
+ colClasses = c("date" = "Date")) %>%
+ transmute(date, isguard, isexit, variable = "advbw",
+ value = advbw * 8 / 1e9)
+ bwhist <- read.csv(paste(stats_dir, "bandwidth.csv", sep = ""),
+ colClasses = c("date" = "Date")) %>%
+ transmute(date, isguard, isexit, variable = "bwhist",
+ value = (bwread + bwwrite) * 8 / 2e9)
+ rbind(advbw, bwhist) %>%
filter(if (!is.null(start_p)) date >= as.Date(start_p) else TRUE) %>%
filter(if (!is.null(end_p)) date <= as.Date(end_p) else TRUE) %>%
- filter(isexit != "") %>%
- filter(isguard != "")
- b <- data.frame(date = b$date,
- isexit = b$isexit == "t", isguard = b$isguard == "t",
- advbw = b$advbw * 8 / 1e9,
- bwhist = (b$bwread + b$bwwrite) * 8 / 2e9)
- b <- rbind(
- data.frame(b[b$isguard == TRUE, ], flag = "guard"),
- data.frame(b[b$isexit == TRUE, ], flag = "exit"))
- b <- data.frame(date = b$date, advbw = b$advbw, bwhist = b$bwhist,
- flag = b$flag)
- b <- aggregate(list(advbw = b$advbw, bwhist = b$bwhist),
- by = list(date = b$date, flag = b$flag), FUN = sum,
- na.rm = TRUE, na.action = NULL)
- b <- gather(b, type, value, -c(date, flag))
- bandwidth <- b[b$value > 0, ]
- bandwidth <- data.frame(date = bandwidth$date,
- variable = as.factor(paste(bandwidth$flag, "_", bandwidth$type,
- sep = "")), value = bandwidth$value)
- bandwidth$variable <- factor(bandwidth$variable,
- levels = levels(bandwidth$variable)[c(3, 4, 1, 2)])
- bandwidth
+ group_by(date, variable) %>%
+ summarize(exit = sum(value[isexit == "t"]),
+ guard = sum(value[isguard == "t"])) %>%
+ gather(flag, value, -date, -variable) %>%
+ unite(variable, flag, variable) %>%
+ mutate(variable = factor(variable,
+ levels = c("guard_advbw", "guard_bwhist", "exit_advbw", "exit_bwhist")))
}
plot_bandwidth_flags <- function(start_p, end_p, path_p) {
diff --git a/src/main/java/org/torproject/metrics/stats/ipv6servers/Database.java b/src/main/java/org/torproject/metrics/stats/ipv6servers/Database.java
index c3a1fec..b5efe3e 100644
--- a/src/main/java/org/torproject/metrics/stats/ipv6servers/Database.java
+++ b/src/main/java/org/torproject/metrics/stats/ipv6servers/Database.java
@@ -435,6 +435,28 @@ class Database implements AutoCloseable {
return statistics;
}
+ /** Query the bandwidth_advbw view. */
+ List<String[]> queryAdvbw() throws SQLException {
+ List<String[]> statistics = new ArrayList<>();
+ String columns = "date, isexit, isguard, advbw";
+ statistics.add(columns.split(", "));
+ Statement st = this.connection.createStatement();
+ Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone("UTC"),
+ Locale.US);
+ String queryString = "SELECT " + columns + " FROM bandwidth_advbw";
+ try (ResultSet rs = st.executeQuery(queryString)) {
+ while (rs.next()) {
+ String[] outputLine = new String[4];
+ outputLine[0] = rs.getDate("date", calendar).toLocalDate().toString();
+ outputLine[1] = rs.getString("isexit");
+ outputLine[2] = rs.getString("isguard");
+ outputLine[3] = getLongFromResultSet(rs, "advbw");
+ statistics.add(outputLine);
+ }
+ }
+ return statistics;
+ }
+
/** Query the servers_networksize view. */
List<String[]> queryNetworksize() throws SQLException {
List<String[]> statistics = new ArrayList<>();
diff --git a/src/main/java/org/torproject/metrics/stats/ipv6servers/Main.java b/src/main/java/org/torproject/metrics/stats/ipv6servers/Main.java
index a91a74f..d322a2e 100644
--- a/src/main/java/org/torproject/metrics/stats/ipv6servers/Main.java
+++ b/src/main/java/org/torproject/metrics/stats/ipv6servers/Main.java
@@ -88,6 +88,8 @@ public class Main {
log.info("Querying aggregated statistics from the database.");
new Writer().write(Paths.get(Configuration.output, "ipv6servers.csv"),
database.queryServersIpv6());
+ new Writer().write(Paths.get(Configuration.output, "advbw.csv"),
+ database.queryAdvbw());
new Writer().write(Paths.get(Configuration.output, "networksize.csv"),
database.queryNetworksize());
new Writer().write(Paths.get(Configuration.output, "relayflags.csv"),
diff --git a/src/main/java/org/torproject/metrics/stats/servers/Configuration.java b/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
index c4597bc..76788df 100644
--- a/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
@@ -102,7 +102,6 @@ public class Configuration {
if (this.directoryArchivesDirectories.isEmpty()) {
String prefix = "../../shared/in/recent/relay-descriptors/";
return Arrays.asList(new File(prefix + "consensuses/"),
- new File(prefix + "server-descriptors/"),
new File(prefix + "extra-infos/"));
} else {
return this.directoryArchivesDirectories;
diff --git a/src/main/java/org/torproject/metrics/stats/servers/RelayDescriptorDatabaseImporter.java b/src/main/java/org/torproject/metrics/stats/servers/RelayDescriptorDatabaseImporter.java
index c9a6fa7..2d1ae47 100644
--- a/src/main/java/org/torproject/metrics/stats/servers/RelayDescriptorDatabaseImporter.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/RelayDescriptorDatabaseImporter.java
@@ -9,7 +9,6 @@ import org.torproject.descriptor.DescriptorSourceFactory;
import org.torproject.descriptor.ExtraInfoDescriptor;
import org.torproject.descriptor.NetworkStatusEntry;
import org.torproject.descriptor.RelayNetworkStatusConsensus;
-import org.torproject.descriptor.ServerDescriptor;
import org.postgresql.util.PGbytea;
@@ -20,7 +19,6 @@ import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
-import java.nio.charset.StandardCharsets;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
@@ -28,7 +26,6 @@ import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
-import java.sql.Types;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
@@ -44,10 +41,6 @@ import java.util.TreeSet;
/**
* Parse directory data.
*/
-
-/* TODO Split up this class and move its parts to cron.network,
- * cron.users, and status.relaysearch packages. Requires extensive
- * changes to the database schema though. */
public final class RelayDescriptorDatabaseImporter {
/**
@@ -58,20 +51,10 @@ public final class RelayDescriptorDatabaseImporter {
/* Counters to keep track of the number of records committed before
* each transaction. */
- private int rdsCount = 0;
-
- private int resCount = 0;
-
private int rhsCount = 0;
private int rrsCount = 0;
- private int rcsCount = 0;
-
- private int rvsCount = 0;
-
- private int rqsCount = 0;
-
/**
* Relay descriptor database connection.
*/
@@ -85,18 +68,6 @@ public final class RelayDescriptorDatabaseImporter {
private PreparedStatement psSs;
/**
- * Prepared statement to check whether a given server descriptor has
- * been imported into the database before.
- */
- private PreparedStatement psDs;
-
- /**
- * Prepared statement to check whether a given network status consensus
- * has been imported into the database before.
- */
- private PreparedStatement psCs;
-
- /**
* Set of dates that have been inserted into the database for being
* included in the next refresh run.
*/
@@ -115,22 +86,11 @@ public final class RelayDescriptorDatabaseImporter {
private PreparedStatement psR;
/**
- * Prepared statement to insert a server descriptor into the database.
- */
- private PreparedStatement psD;
-
- /**
* Callable statement to insert the bandwidth history of an extra-info
* descriptor into the database.
*/
private CallableStatement csH;
- /**
- * Prepared statement to insert a network status consensus into the
- * database.
- */
- private PreparedStatement psC;
-
private static Logger log
= LoggerFactory.getLogger(RelayDescriptorDatabaseImporter.class);
@@ -145,21 +105,11 @@ public final class RelayDescriptorDatabaseImporter {
private BufferedWriter statusentryOut;
/**
- * Raw import file containing server descriptors.
- */
- private BufferedWriter descriptorOut;
-
- /**
* Raw import file containing bandwidth histories.
*/
private BufferedWriter bwhistOut;
/**
- * Raw import file containing consensuses.
- */
- private BufferedWriter consensusOut;
-
- /**
* Date format to parse timestamps.
*/
private SimpleDateFormat dateTimeFormat;
@@ -212,10 +162,6 @@ public final class RelayDescriptorDatabaseImporter {
/* Prepare statements. */
this.psSs = conn.prepareStatement("SELECT fingerprint "
+ "FROM statusentry WHERE validafter = ?");
- this.psDs = conn.prepareStatement("SELECT COUNT(*) "
- + "FROM descriptor WHERE descriptor = ?");
- this.psCs = conn.prepareStatement("SELECT COUNT(*) "
- + "FROM consensus WHERE validafter = ?");
this.psR = conn.prepareStatement("INSERT INTO statusentry "
+ "(validafter, nickname, fingerprint, descriptor, "
+ "published, address, orport, dirport, isauthority, "
@@ -224,16 +170,8 @@ public final class RelayDescriptorDatabaseImporter {
+ "isvalid, isv2dir, isv3dir, version, bandwidth, ports, "
+ "rawdesc) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, "
+ "?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)");
- this.psD = conn.prepareStatement("INSERT INTO descriptor "
- + "(descriptor, nickname, address, orport, dirport, "
- + "fingerprint, bandwidthavg, bandwidthburst, "
- + "bandwidthobserved, platform, published, uptime, "
- + "extrainfo) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, "
- + "?)");
this.csH = conn.prepareCall("{call insert_bwhist(?, ?, ?, ?, ?, "
+ "?)}");
- this.psC = conn.prepareStatement("INSERT INTO consensus "
- + "(validafter) VALUES (?)");
this.psU = conn.prepareStatement("INSERT INTO scheduled_updates "
+ "(date) VALUES (?)");
this.scheduledUpdates = new HashSet<>();
@@ -390,95 +328,9 @@ public final class RelayDescriptorDatabaseImporter {
}
/**
- * Insert server descriptor into database.
- */
- public void addServerDescriptorContents(String descriptor,
- String nickname, String address, int orPort, int dirPort,
- String relayIdentifier, long bandwidthAvg, long bandwidthBurst,
- long bandwidthObserved, String platform, long published,
- Long uptime, String extraInfoDigest) {
- if (this.importIntoDatabase) {
- try {
- this.addDateToScheduledUpdates(published);
- this.addDateToScheduledUpdates(
- published + 24L * 60L * 60L * 1000L);
- Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
- this.psDs.setString(1, descriptor);
- ResultSet rs = psDs.executeQuery();
- rs.next();
- if (rs.getInt(1) == 0) {
- this.psD.clearParameters();
- this.psD.setString(1, descriptor);
- this.psD.setString(2, nickname);
- this.psD.setString(3, address);
- this.psD.setInt(4, orPort);
- this.psD.setInt(5, dirPort);
- this.psD.setString(6, relayIdentifier);
- this.psD.setLong(7, bandwidthAvg);
- this.psD.setLong(8, bandwidthBurst);
- this.psD.setLong(9, bandwidthObserved);
- /* Remove all non-ASCII characters from the platform string, or
- * we'll make Postgres unhappy. Sun's JDK and OpenJDK behave
- * differently when creating a new String with a given encoding.
- * That's what the regexp below is for. */
- this.psD.setString(10, new String(platform.getBytes(),
- StandardCharsets.US_ASCII).replaceAll("[^\\p{ASCII}]",""));
- this.psD.setTimestamp(11, new Timestamp(published), cal);
- if (null != uptime) {
- this.psD.setLong(12, uptime);
- } else {
- this.psD.setNull(12, Types.BIGINT);
- }
- this.psD.setString(13, extraInfoDigest);
- this.psD.executeUpdate();
- rdsCount++;
- if (rdsCount % autoCommitCount == 0) {
- this.conn.commit();
- }
- }
- } catch (SQLException e) {
- log.warn("Could not add server "
- + "descriptor. We won't make any further SQL requests in "
- + "this execution.", e);
- this.importIntoDatabase = false;
- }
- }
- if (this.writeRawImportFiles) {
- try {
- if (this.descriptorOut == null) {
- new File(rawFilesDirectory).mkdirs();
- this.descriptorOut = new BufferedWriter(new FileWriter(
- rawFilesDirectory + "/descriptor.sql"));
- this.descriptorOut.write(" COPY descriptor (descriptor, "
- + "nickname, address, orport, dirport, fingerprint, "
- + "bandwidthavg, bandwidthburst, bandwidthobserved, "
- + "platform, published, uptime, extrainfo) FROM stdin;\n");
- }
- this.descriptorOut.write(descriptor.toLowerCase() + "\t"
- + nickname + "\t" + address + "\t" + orPort + "\t" + dirPort
- + "\t" + relayIdentifier + "\t" + bandwidthAvg + "\t"
- + bandwidthBurst + "\t" + bandwidthObserved + "\t"
- + (platform != null && platform.length() > 0
- ? new String(platform.getBytes(), StandardCharsets.US_ASCII)
- : "\\N") + "\t" + this.dateTimeFormat.format(published) + "\t"
- + (uptime >= 0 ? uptime : "\\N") + "\t"
- + (extraInfoDigest != null ? extraInfoDigest : "\\N")
- + "\n");
- } catch (IOException e) {
- log.warn("Could not write server "
- + "descriptor to raw database import file. We won't make "
- + "any further attempts to write raw import files in this "
- + "execution.", e);
- this.writeRawImportFiles = false;
- }
- }
- }
-
- /**
* Insert extra-info descriptor into database.
*/
- public void addExtraInfoDescriptorContents(String extraInfoDigest,
- String nickname, String fingerprint, long published,
+ public void addExtraInfoDescriptorContents(String fingerprint, long published,
List<String> bandwidthHistoryLines) {
if (!bandwidthHistoryLines.isEmpty()) {
this.addBandwidthHistory(fingerprint.toLowerCase(), published,
@@ -766,55 +618,6 @@ public final class RelayDescriptorDatabaseImporter {
}
}
- /**
- * Insert network status consensus into database.
- */
- public void addConsensus(long validAfter) {
- if (this.importIntoDatabase) {
- try {
- this.addDateToScheduledUpdates(validAfter);
- Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
- Timestamp validAfterTimestamp = new Timestamp(validAfter);
- this.psCs.setTimestamp(1, validAfterTimestamp, cal);
- ResultSet rs = psCs.executeQuery();
- rs.next();
- if (rs.getInt(1) == 0) {
- this.psC.clearParameters();
- this.psC.setTimestamp(1, validAfterTimestamp, cal);
- this.psC.executeUpdate();
- rcsCount++;
- if (rcsCount % autoCommitCount == 0) {
- this.conn.commit();
- }
- }
- } catch (SQLException e) {
- log.warn("Could not add network status "
- + "consensus. We won't make any further SQL requests in "
- + "this execution.", e);
- this.importIntoDatabase = false;
- }
- }
- if (this.writeRawImportFiles) {
- try {
- if (this.consensusOut == null) {
- new File(rawFilesDirectory).mkdirs();
- this.consensusOut = new BufferedWriter(new FileWriter(
- rawFilesDirectory + "/consensus.sql"));
- this.consensusOut.write(" COPY consensus (validafter) "
- + "FROM stdin;\n");
- }
- String validAfterString = this.dateTimeFormat.format(validAfter);
- this.consensusOut.write(validAfterString + "\n");
- } catch (IOException e) {
- log.warn("Could not write network status "
- + "consensus to raw database import file. We won't make "
- + "any further attempts to write raw import files in this "
- + "execution.", e);
- this.writeRawImportFiles = false;
- }
- }
- }
-
/** Imports relay descriptors into the database. */
public void importRelayDescriptors() {
log.info("Importing files in directories " + archivesDirectories
@@ -834,8 +637,6 @@ public final class RelayDescriptorDatabaseImporter {
if (descriptor instanceof RelayNetworkStatusConsensus) {
this.addRelayNetworkStatusConsensus(
(RelayNetworkStatusConsensus) descriptor);
- } else if (descriptor instanceof ServerDescriptor) {
- this.addServerDescriptor((ServerDescriptor) descriptor);
} else if (descriptor instanceof ExtraInfoDescriptor) {
this.addExtraInfoDescriptor((ExtraInfoDescriptor) descriptor);
}
@@ -862,18 +663,6 @@ public final class RelayDescriptorDatabaseImporter {
statusEntry.getBandwidth(), statusEntry.getPortList(),
statusEntry.getStatusEntryBytes());
}
- this.addConsensus(consensus.getValidAfterMillis());
- }
-
- private void addServerDescriptor(ServerDescriptor descriptor) {
- this.addServerDescriptorContents(
- descriptor.getDigestSha1Hex(), descriptor.getNickname(),
- descriptor.getAddress(), descriptor.getOrPort(),
- descriptor.getDirPort(), descriptor.getFingerprint(),
- descriptor.getBandwidthRate(), descriptor.getBandwidthBurst(),
- descriptor.getBandwidthObserved(), descriptor.getPlatform(),
- descriptor.getPublishedMillis(), descriptor.getUptime(),
- descriptor.getExtraInfoDigestSha1Hex());
}
private void addExtraInfoDescriptor(ExtraInfoDescriptor descriptor) {
@@ -892,8 +681,7 @@ public final class RelayDescriptorDatabaseImporter {
bandwidthHistoryLines.add(
descriptor.getDirreqReadHistory().getLine());
}
- this.addExtraInfoDescriptorContents(descriptor.getDigestSha1Hex(),
- descriptor.getNickname(),
+ this.addExtraInfoDescriptorContents(
descriptor.getFingerprint().toLowerCase(),
descriptor.getPublishedMillis(), bandwidthHistoryLines);
}
@@ -904,12 +692,8 @@ public final class RelayDescriptorDatabaseImporter {
public void closeConnection() {
/* Log stats about imported descriptors. */
- log.info("Finished importing relay "
- + "descriptors: {} consensuses, {} network status entries, {} "
- + "votes, {} server descriptors, {} extra-info descriptors, {} "
- + "bandwidth history elements, and {} dirreq stats elements",
- rcsCount, rrsCount, rvsCount, rdsCount, resCount, rhsCount,
- rqsCount);
+ log.info("Finished importing relay descriptors: {} network status entries "
+ + "and {} bandwidth history elements", rrsCount, rhsCount);
/* Insert scheduled updates a second time, just in case the refresh
* run has started since inserting them the first time in which case
@@ -951,18 +735,10 @@ public final class RelayDescriptorDatabaseImporter {
this.statusentryOut.write("\\.\n");
this.statusentryOut.close();
}
- if (this.descriptorOut != null) {
- this.descriptorOut.write("\\.\n");
- this.descriptorOut.close();
- }
if (this.bwhistOut != null) {
this.bwhistOut.write("\\.\n");
this.bwhistOut.close();
}
- if (this.consensusOut != null) {
- this.consensusOut.write("\\.\n");
- this.consensusOut.close();
- }
} catch (IOException e) {
log.warn("Could not close one or more raw database import files.", e);
}
diff --git a/src/main/sql/ipv6servers/init-ipv6servers.sql b/src/main/sql/ipv6servers/init-ipv6servers.sql
index b478a49..c94a19d 100644
--- a/src/main/sql/ipv6servers/init-ipv6servers.sql
+++ b/src/main/sql/ipv6servers/init-ipv6servers.sql
@@ -312,6 +312,17 @@ GROUP BY DATE(valid_after), server, guard_relay, exit_relay, announced_ipv6,
ORDER BY valid_after_date, server, guard_relay, exit_relay, announced_ipv6,
exiting_ipv6_relay, reachable_ipv6_relay;
+-- View on advertised bandwidth by Exit/Guard flag combination.
+CREATE OR REPLACE VIEW bandwidth_advbw AS
+SELECT valid_after_date AS date,
+ exit_relay AS isexit,
+ guard_relay AS isguard,
+ FLOOR(SUM(advertised_bandwidth_bytes_sum_avg)) AS advbw
+FROM ipv6servers
+WHERE server = 'relay'
+GROUP BY date, isexit, isguard
+ORDER BY date, isexit, isguard;
+
-- View on the number of running servers by relay flag.
CREATE OR REPLACE VIEW servers_flags_complete AS
WITH included_statuses AS (
diff --git a/src/main/sql/legacy/tordir.sql b/src/main/sql/legacy/tordir.sql
index f1d6767..dfe7b5d 100644
--- a/src/main/sql/legacy/tordir.sql
+++ b/src/main/sql/legacy/tordir.sql
@@ -3,33 +3,6 @@
CREATE LANGUAGE plpgsql;
--- TABLE descriptor
--- Contains all of the descriptors published by routers.
-CREATE TABLE descriptor (
- descriptor CHARACTER(40) NOT NULL,
- nickname CHARACTER VARYING(19) NOT NULL,
- address CHARACTER VARYING(15) NOT NULL,
- orport INTEGER NOT NULL,
- dirport INTEGER NOT NULL,
- fingerprint CHARACTER(40) NOT NULL,
- bandwidthavg BIGINT NOT NULL,
- bandwidthburst BIGINT NOT NULL,
- bandwidthobserved BIGINT NOT NULL,
- platform CHARACTER VARYING(256),
- published TIMESTAMP WITHOUT TIME ZONE NOT NULL,
- uptime BIGINT,
- extrainfo CHARACTER(40),
- CONSTRAINT descriptor_pkey PRIMARY KEY (descriptor)
-);
-
-CREATE OR REPLACE FUNCTION delete_old_descriptor()
-RETURNS INTEGER AS $$
- BEGIN
- DELETE FROM descriptor WHERE DATE(published) < current_date - 14;
- RETURN 1;
- END;
-$$ LANGUAGE plpgsql;
-
-- Contains bandwidth histories reported by relays in extra-info
-- descriptors. Each row contains the reported bandwidth in 15-minute
-- intervals for each relay and date.
@@ -97,22 +70,6 @@ RETURNS INTEGER AS $$
END;
$$ LANGUAGE plpgsql;
--- TABLE consensus
--- Contains all of the consensuses published by the directories.
-CREATE TABLE consensus (
- validafter TIMESTAMP WITHOUT TIME ZONE NOT NULL,
- CONSTRAINT consensus_pkey PRIMARY KEY (validafter)
-);
-
--- TABLE bandwidth_flags
-CREATE TABLE bandwidth_flags (
- date DATE NOT NULL,
- isexit BOOLEAN NOT NULL,
- isguard BOOLEAN NOT NULL,
- bwadvertised BIGINT NOT NULL,
- CONSTRAINT bandwidth_flags_pkey PRIMARY KEY(date, isexit, isguard)
-);
-
-- TABLE bwhist_flags
CREATE TABLE bwhist_flags (
date DATE NOT NULL,
@@ -149,15 +106,6 @@ CREATE TABLE user_stats (
CONSTRAINT user_stats_pkey PRIMARY KEY(date, country)
);
--- TABLE relay_statuses_per_day
--- A helper table which is commonly used to update the tables above in the
--- refresh_* functions.
-CREATE TABLE relay_statuses_per_day (
- date DATE NOT NULL,
- count INTEGER NOT NULL,
- CONSTRAINT relay_statuses_per_day_pkey PRIMARY KEY(date)
-);
-
-- Dates to be included in the next refresh run.
CREATE TABLE scheduled_updates (
id SERIAL,
@@ -174,24 +122,6 @@ CREATE TABLE updates (
date DATE
);
--- FUNCTION refresh_relay_statuses_per_day()
--- Updates helper table which is used to refresh the aggregate tables.
-CREATE OR REPLACE FUNCTION refresh_relay_statuses_per_day()
-RETURNS INTEGER AS $$
- BEGIN
- DELETE FROM relay_statuses_per_day
- WHERE date IN (SELECT date FROM updates);
- INSERT INTO relay_statuses_per_day (date, count)
- SELECT DATE(validafter) AS date, COUNT(*) AS count
- FROM consensus
- WHERE DATE(validafter) >= (SELECT MIN(date) FROM updates)
- AND DATE(validafter) <= (SELECT MAX(date) FROM updates)
- AND DATE(validafter) IN (SELECT date FROM updates)
- GROUP BY DATE(validafter);
- RETURN 1;
- END;
-$$ LANGUAGE plpgsql;
-
CREATE OR REPLACE FUNCTION array_sum (BIGINT[]) RETURNS BIGINT AS $$
SELECT SUM($1[i])::bigint
FROM generate_series(array_lower($1, 1), array_upper($1, 1)) index(i);
@@ -247,45 +177,11 @@ $$ LANGUAGE plpgsql;
-- refresh_* functions
-- The following functions keep their corresponding aggregate tables
--- up-to-date. They should be called every time ERNIE is run, or when new
--- data is finished being added to the descriptor or statusentry tables.
+-- up-to-date. They should be called every time this module is run, or when new
+-- data is finished being added to the statusentry tables.
-- They find what new data has been entered or updated based on the
-- updates table.
-CREATE OR REPLACE FUNCTION refresh_bandwidth_flags() RETURNS INTEGER AS $$
- DECLARE
- min_date TIMESTAMP WITHOUT TIME ZONE;
- max_date TIMESTAMP WITHOUT TIME ZONE;
- BEGIN
-
- min_date := (SELECT MIN(date) FROM updates);
- max_date := (SELECT MAX(date) + 1 FROM updates);
-
- DELETE FROM bandwidth_flags WHERE date IN (SELECT date FROM updates);
- EXECUTE '
- INSERT INTO bandwidth_flags (date, isexit, isguard, bwadvertised)
- SELECT DATE(validafter) AS date,
- BOOL_OR(isexit) AS isexit,
- BOOL_OR(isguard) AS isguard,
- (SUM(LEAST(bandwidthavg, bandwidthobserved))
- / relay_statuses_per_day.count)::BIGINT AS bwadvertised
- FROM descriptor RIGHT JOIN statusentry
- ON descriptor.descriptor = statusentry.descriptor
- JOIN relay_statuses_per_day
- ON DATE(validafter) = relay_statuses_per_day.date
- WHERE isrunning = TRUE
- AND validafter >= ''' || min_date || '''
- AND validafter < ''' || max_date || '''
- AND DATE(validafter) IN (SELECT date FROM updates)
- AND relay_statuses_per_day.date >= ''' || min_date || '''
- AND relay_statuses_per_day.date < ''' || max_date || '''
- AND DATE(relay_statuses_per_day.date) IN
- (SELECT date FROM updates)
- GROUP BY DATE(validafter), isexit, isguard, relay_statuses_per_day.count';
- RETURN 1;
- END;
-$$ LANGUAGE plpgsql;
-
CREATE OR REPLACE FUNCTION refresh_bwhist_flags() RETURNS INTEGER AS $$
DECLARE
min_date TIMESTAMP WITHOUT TIME ZONE;
@@ -391,18 +287,12 @@ CREATE OR REPLACE FUNCTION refresh_all() RETURNS INTEGER AS $$
DELETE FROM updates;
RAISE NOTICE '% Copying scheduled dates.', timeofday();
INSERT INTO updates SELECT * FROM scheduled_updates;
- RAISE NOTICE '% Refreshing relay statuses per day.', timeofday();
- PERFORM refresh_relay_statuses_per_day();
- RAISE NOTICE '% Refreshing total relay bandwidth.', timeofday();
- PERFORM refresh_bandwidth_flags();
RAISE NOTICE '% Refreshing bandwidth history.', timeofday();
PERFORM refresh_bwhist_flags();
RAISE NOTICE '% Refreshing user statistics.', timeofday();
PERFORM refresh_user_stats();
RAISE NOTICE '% Deleting processed dates.', timeofday();
DELETE FROM scheduled_updates WHERE id IN (SELECT id FROM updates);
- RAISE NOTICE '% Deleting old descriptors.', timeofday();
- PERFORM delete_old_descriptor();
RAISE NOTICE '% Deleting old bandwidth histories.', timeofday();
PERFORM delete_old_bwhist();
RAISE NOTICE '% Deleting old status entries.', timeofday();
@@ -414,23 +304,14 @@ $$ LANGUAGE plpgsql;
-- View for exporting bandwidth statistics.
CREATE VIEW stats_bandwidth AS
- (SELECT COALESCE(bandwidth_flags.date, bwhist_flags.date) AS date,
- COALESCE(bandwidth_flags.isexit, bwhist_flags.isexit) AS isexit,
- COALESCE(bandwidth_flags.isguard, bwhist_flags.isguard) AS isguard,
- bandwidth_flags.bwadvertised AS advbw,
- CASE WHEN bwhist_flags.read IS NOT NULL
- THEN bwhist_flags.read / 86400 END AS bwread,
- CASE WHEN bwhist_flags.written IS NOT NULL
- THEN bwhist_flags.written / 86400 END AS bwwrite,
+ (SELECT date, isexit, isguard,
+ read / 86400 AS bwread,
+ written / 86400 AS bwwrite,
NULL AS dirread, NULL AS dirwrite
- FROM bandwidth_flags FULL OUTER JOIN bwhist_flags
- ON bandwidth_flags.date = bwhist_flags.date
- AND bandwidth_flags.isexit = bwhist_flags.isexit
- AND bandwidth_flags.isguard = bwhist_flags.isguard
- WHERE COALESCE(bandwidth_flags.date, bwhist_flags.date) <
- current_date - 2)
+ FROM bwhist_flags
+ WHERE date < current_date - 2)
UNION ALL
- (SELECT date, NULL AS isexit, NULL AS isguard, NULL AS advbw,
+ (SELECT date, NULL AS isexit, NULL AS isguard,
NULL AS bwread, NULL AS bwwrite,
FLOOR(CAST(dr AS NUMERIC) / CAST(86400 AS NUMERIC)) AS dirread,
FLOOR(CAST(dw AS NUMERIC) / CAST(86400 AS NUMERIC)) AS dirwrite
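The dplyr rewrite in the hunk above replaces hand-rolled `aggregate()` calls with a group-by-and-spread over long-format rows: each source file is reduced to `(date, variable, value)` triples, summed per group, then pivoted into one column per variable. A minimal pure-Python sketch of that pattern, using made-up sample values (all numbers below are illustrative, not real metrics data):

```python
from collections import defaultdict

# Long-format rows (date, variable, value), as the two transmute()
# calls in the R code produce them. Values here are invented.
rows = [
    ("2018-11-01", "advbw", 150.0),
    ("2018-11-01", "advbw", 50.0),
    ("2018-11-01", "bwhist", 90.0),
    ("2018-11-02", "advbw", 210.0),
    ("2018-11-02", "bwhist", 95.0),
]

# group_by(date, variable) %>% summarize(value = sum(value))
totals = defaultdict(float)
for date, variable, value in rows:
    totals[(date, variable)] += value

# spread(variable, value): one row per date, one column per variable
wide = defaultdict(dict)
for (date, variable), value in totals.items():
    wide[date][variable] = value

for date in sorted(wide):
    print(date, wide[date])
```

The same reshaping also underlies the `prepare_bandwidth_flags` change, where the pivot key is the combined `flag_variable` name instead of `variable` alone.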
[metrics-web/master] Remove long unused code from legacy module.
by karsten@torproject.org 22 Nov '18
commit ca5fa45df0cfb14801e3556caa93f2cc6d26d790
Author: Karsten Loesing <karsten.loesing(a)gmx.net>
Date: Tue Nov 13 18:10:18 2018 +0100
Remove long unused code from legacy module.
This includes the lock file, the option to write raw output files for
importing into the database, and a couple boolean config options that
have always been true.
Required changes to existing legacy.config (removals):
ImportDirectoryArchives
KeepDirectoryArchiveImportHistory
WriteRelayDescriptorDatabase
WriteRelayDescriptorsRawFiles
RelayDescriptorRawFilesDirectory
Part of #28116.
---
.../metrics/stats/servers/Configuration.java | 46 +-------
.../torproject/metrics/stats/servers/LockFile.java | 61 ----------
.../org/torproject/metrics/stats/servers/Main.java | 33 +-----
.../servers/RelayDescriptorDatabaseImporter.java | 126 +--------------------
src/main/resources/legacy.config.template | 20 ----
5 files changed, 10 insertions(+), 276 deletions(-)
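One of the removals in this commit is the `LockFile` class, which guarded against overlapping runs with a timestamp file that was treated as stale after 23 hours. A minimal Python sketch of that (now removed) mechanism, mirroring the Java logic for reference — the function names here are hypothetical, not part of the codebase:

```python
import os
import time

# The removed Java code considered a lock stale after 23 hours.
STALE_AFTER_MS = 23 * 60 * 60 * 1000

def acquire_lock(path):
    """Return True if the lock was acquired, False if a fresh lock exists."""
    if os.path.exists(path):
        with open(path) as f:
            run_started = int(f.read().strip())
        if int(time.time() * 1000) - run_started < STALE_AFTER_MS:
            return False  # a previous run is (probably) still active
    # Create or overwrite the lock with the current time in milliseconds.
    with open(path, "w") as f:
        f.write("%d\n" % int(time.time() * 1000))
    return True

def release_lock(path):
    """Delete the lock file, if present."""
    if os.path.exists(path):
        os.remove(path)
```

With the lock gone, overlap protection is expected to come from however the module is scheduled (e.g. a cron job that does not start a second instance), rather than from the importer itself.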
diff --git a/src/main/java/org/torproject/metrics/stats/servers/Configuration.java b/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
index 76788df..b6ee397 100644
--- a/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
@@ -24,21 +24,11 @@ public class Configuration {
private static Logger log = LoggerFactory.getLogger(Configuration.class);
- private boolean importDirectoryArchives = false;
-
private List<File> directoryArchivesDirectories = new ArrayList<>();
- private boolean keepDirectoryArchiveImportHistory = false;
-
- private boolean writeRelayDescriptorDatabase = false;
-
private String relayDescriptorDatabaseJdbc =
"jdbc:postgresql://localhost/tordir?user=metrics&password=password";
- private boolean writeRelayDescriptorsRawFiles = false;
-
- private String relayDescriptorRawFilesDirectory = "pg-import/";
-
/** Initializes this configuration class. */
public Configuration() {
@@ -51,24 +41,10 @@ public class Configuration {
String line = null;
try (BufferedReader br = new BufferedReader(new FileReader(configFile))) {
while ((line = br.readLine()) != null) {
- if (line.startsWith("ImportDirectoryArchives")) {
- this.importDirectoryArchives = Integer.parseInt(
- line.split(" ")[1]) != 0;
- } else if (line.startsWith("DirectoryArchivesDirectory")) {
+ if (line.startsWith("DirectoryArchivesDirectory")) {
this.directoryArchivesDirectories.add(new File(line.split(" ")[1]));
- } else if (line.startsWith("KeepDirectoryArchiveImportHistory")) {
- this.keepDirectoryArchiveImportHistory = Integer.parseInt(
- line.split(" ")[1]) != 0;
- } else if (line.startsWith("WriteRelayDescriptorDatabase")) {
- this.writeRelayDescriptorDatabase = Integer.parseInt(
- line.split(" ")[1]) != 0;
} else if (line.startsWith("RelayDescriptorDatabaseJDBC")) {
this.relayDescriptorDatabaseJdbc = line.split(" ")[1];
- } else if (line.startsWith("WriteRelayDescriptorsRawFiles")) {
- this.writeRelayDescriptorsRawFiles = Integer.parseInt(
- line.split(" ")[1]) != 0;
- } else if (line.startsWith("RelayDescriptorRawFilesDirectory")) {
- this.relayDescriptorRawFilesDirectory = line.split(" ")[1];
} else if (!line.startsWith("#") && line.length() > 0) {
log.error("Configuration file contains unrecognized "
+ "configuration key in line '{}'! Exiting!", line);
@@ -93,10 +69,6 @@ public class Configuration {
}
}
- public boolean getImportDirectoryArchives() {
- return this.importDirectoryArchives;
- }
-
/** Returns directories containing archived descriptors. */
public List<File> getDirectoryArchivesDirectories() {
if (this.directoryArchivesDirectories.isEmpty()) {
@@ -108,24 +80,8 @@ public class Configuration {
}
}
- public boolean getKeepDirectoryArchiveImportHistory() {
- return this.keepDirectoryArchiveImportHistory;
- }
-
- public boolean getWriteRelayDescriptorDatabase() {
- return this.writeRelayDescriptorDatabase;
- }
-
public String getRelayDescriptorDatabaseJdbc() {
return this.relayDescriptorDatabaseJdbc;
}
-
- public boolean getWriteRelayDescriptorsRawFiles() {
- return this.writeRelayDescriptorsRawFiles;
- }
-
- public String getRelayDescriptorRawFilesDirectory() {
- return this.relayDescriptorRawFilesDirectory;
- }
}
diff --git a/src/main/java/org/torproject/metrics/stats/servers/LockFile.java b/src/main/java/org/torproject/metrics/stats/servers/LockFile.java
deleted file mode 100644
index c6063d1..0000000
--- a/src/main/java/org/torproject/metrics/stats/servers/LockFile.java
+++ /dev/null
@@ -1,61 +0,0 @@
-/* Copyright 2011--2018 The Tor Project
- * See LICENSE for licensing information */
-
-package org.torproject.metrics.stats.servers;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.BufferedReader;
-import java.io.BufferedWriter;
-import java.io.File;
-import java.io.FileReader;
-import java.io.FileWriter;
-import java.io.IOException;
-
-public class LockFile {
-
- private File lockFile;
-
- private static Logger log = LoggerFactory.getLogger(LockFile.class);
-
- public LockFile() {
- this.lockFile = new File("lock");
- }
-
- /** Acquires the lock by checking whether a lock file already exists,
- * and if not, by creating one with the current system time as
- * content. */
- public boolean acquireLock() {
- log.debug("Trying to acquire lock...");
- try {
- if (this.lockFile.exists()) {
- BufferedReader br = new BufferedReader(new FileReader("lock"));
- long runStarted = Long.parseLong(br.readLine());
- br.close();
- if (System.currentTimeMillis() - runStarted
- < 23L * 60L * 60L * 1000L) {
- return false;
- }
- }
- BufferedWriter bw = new BufferedWriter(new FileWriter("lock"));
- bw.append("").append(String.valueOf(System.currentTimeMillis()))
- .append("\n");
- bw.close();
- log.debug("Acquired lock.");
- return true;
- } catch (IOException e) {
- log.warn("Caught exception while trying to acquire "
- + "lock!");
- return false;
- }
- }
-
- /** Releases the lock by deleting the lock file, if present. */
- public void releaseLock() {
- log.debug("Releasing lock...");
- this.lockFile.delete();
- log.debug("Released lock.");
- }
-}
-
diff --git a/src/main/java/org/torproject/metrics/stats/servers/Main.java b/src/main/java/org/torproject/metrics/stats/servers/Main.java
index 4d349bc..1454418 100644
--- a/src/main/java/org/torproject/metrics/stats/servers/Main.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/Main.java
@@ -24,38 +24,15 @@ public class Main {
// Initialize configuration
Configuration config = new Configuration();
- // Use lock file to avoid overlapping runs
- LockFile lf = new LockFile();
- if (!lf.acquireLock()) {
- log.error("Warning: ERNIE is already running or has not exited "
- + "cleanly! Exiting!");
- System.exit(1);
- }
-
// Define stats directory for temporary files
File statsDirectory = new File("stats");
// Import relay descriptors
- if (config.getImportDirectoryArchives()) {
- RelayDescriptorDatabaseImporter rddi =
- config.getWriteRelayDescriptorDatabase()
- || config.getWriteRelayDescriptorsRawFiles()
- ? new RelayDescriptorDatabaseImporter(
- config.getWriteRelayDescriptorDatabase()
- ? config.getRelayDescriptorDatabaseJdbc() : null,
- config.getWriteRelayDescriptorsRawFiles()
- ? config.getRelayDescriptorRawFilesDirectory() : null,
- config.getDirectoryArchivesDirectories(),
- statsDirectory,
- config.getKeepDirectoryArchiveImportHistory()) : null;
- if (null != rddi) {
- rddi.importRelayDescriptors();
- rddi.closeConnection();
- }
- }
-
- // Remove lock file
- lf.releaseLock();
+ RelayDescriptorDatabaseImporter rddi = new RelayDescriptorDatabaseImporter(
+ config.getRelayDescriptorDatabaseJdbc(),
+ config.getDirectoryArchivesDirectories(), statsDirectory);
+ rddi.importRelayDescriptors();
+ rddi.closeConnection();
log.info("Terminating ERNIE.");
}
diff --git a/src/main/java/org/torproject/metrics/stats/servers/RelayDescriptorDatabaseImporter.java b/src/main/java/org/torproject/metrics/stats/servers/RelayDescriptorDatabaseImporter.java
index 2d1ae47..d1ae43c 100644
--- a/src/main/java/org/torproject/metrics/stats/servers/RelayDescriptorDatabaseImporter.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/RelayDescriptorDatabaseImporter.java
@@ -10,15 +10,10 @@ import org.torproject.descriptor.ExtraInfoDescriptor;
import org.torproject.descriptor.NetworkStatusEntry;
import org.torproject.descriptor.RelayNetworkStatusConsensus;
-import org.postgresql.util.PGbytea;
-
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
-import java.io.BufferedWriter;
import java.io.File;
-import java.io.FileWriter;
-import java.io.IOException;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
@@ -95,21 +90,6 @@ public final class RelayDescriptorDatabaseImporter {
= LoggerFactory.getLogger(RelayDescriptorDatabaseImporter.class);
/**
- * Directory for writing raw import files.
- */
- private String rawFilesDirectory;
-
- /**
- * Raw import file containing status entries.
- */
- private BufferedWriter statusentryOut;
-
- /**
- * Raw import file containing bandwidth histories.
- */
- private BufferedWriter bwhistOut;
-
- /**
* Date format to parse timestamps.
*/
private SimpleDateFormat dateTimeFormat;
@@ -126,30 +106,24 @@ public final class RelayDescriptorDatabaseImporter {
*/
private Set<String> insertedStatusEntries = new HashSet<>();
- private boolean importIntoDatabase;
-
- private boolean writeRawImportFiles;
+ private boolean importIntoDatabase = true;
private List<File> archivesDirectories;
private File statsDirectory;
- private boolean keepImportHistory;
-
/**
* Initialize database importer by connecting to the database and
* preparing statements.
*/
public RelayDescriptorDatabaseImporter(String connectionUrl,
- String rawFilesDirectory, List<File> archivesDirectories,
- File statsDirectory, boolean keepImportHistory) {
+ List<File> archivesDirectories, File statsDirectory) {
if (archivesDirectories == null || statsDirectory == null) {
throw new IllegalArgumentException();
}
this.archivesDirectories = archivesDirectories;
this.statsDirectory = statsDirectory;
- this.keepImportHistory = keepImportHistory;
if (connectionUrl != null) {
try {
@@ -175,18 +149,11 @@ public final class RelayDescriptorDatabaseImporter {
this.psU = conn.prepareStatement("INSERT INTO scheduled_updates "
+ "(date) VALUES (?)");
this.scheduledUpdates = new HashSet<>();
- this.importIntoDatabase = true;
} catch (SQLException e) {
log.warn("Could not connect to database or prepare statements.", e);
}
}
- /* Remember where we want to write raw import files. */
- if (rawFilesDirectory != null) {
- this.rawFilesDirectory = rawFilesDirectory;
- this.writeRawImportFiles = true;
- }
-
/* Initialize date format, so that we can format timestamps. */
this.dateTimeFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
this.dateTimeFormat.setTimeZone(TimeZone.getTimeZone("UTC"));
@@ -278,53 +245,6 @@ public final class RelayDescriptorDatabaseImporter {
this.importIntoDatabase = false;
}
}
- if (this.writeRawImportFiles) {
- try {
- if (this.statusentryOut == null) {
- new File(rawFilesDirectory).mkdirs();
- this.statusentryOut = new BufferedWriter(new FileWriter(
- rawFilesDirectory + "/statusentry.sql"));
- this.statusentryOut.write(" COPY statusentry (validafter, "
- + "nickname, fingerprint, descriptor, published, address, "
- + "orport, dirport, isauthority, isbadExit, "
- + "isbaddirectory, isexit, isfast, isguard, ishsdir, "
- + "isnamed, isstable, isrunning, isunnamed, isvalid, "
- + "isv2dir, isv3dir, version, bandwidth, ports, rawdesc) "
- + "FROM stdin;\n");
- }
- this.statusentryOut.write(
- this.dateTimeFormat.format(validAfter) + "\t" + nickname
- + "\t" + fingerprint.toLowerCase() + "\t"
- + descriptor.toLowerCase() + "\t"
- + this.dateTimeFormat.format(published) + "\t" + address
- + "\t" + orPort + "\t" + dirPort + "\t"
- + (flags.contains("Authority") ? "t" : "f") + "\t"
- + (flags.contains("BadExit") ? "t" : "f") + "\t"
- + (flags.contains("BadDirectory") ? "t" : "f") + "\t"
- + (flags.contains("Exit") ? "t" : "f") + "\t"
- + (flags.contains("Fast") ? "t" : "f") + "\t"
- + (flags.contains("Guard") ? "t" : "f") + "\t"
- + (flags.contains("HSDir") ? "t" : "f") + "\t"
- + (flags.contains("Named") ? "t" : "f") + "\t"
- + (flags.contains("Stable") ? "t" : "f") + "\t"
- + (flags.contains("Running") ? "t" : "f") + "\t"
- + (flags.contains("Unnamed") ? "t" : "f") + "\t"
- + (flags.contains("Valid") ? "t" : "f") + "\t"
- + (flags.contains("V2Dir") ? "t" : "f") + "\t"
- + (flags.contains("V3Dir") ? "t" : "f") + "\t"
- + (version != null ? version : "\\N") + "\t"
- + (bandwidth >= 0 ? bandwidth : "\\N") + "\t"
- + (ports != null ? ports : "\\N") + "\t");
- this.statusentryOut.write(PGbytea.toPGString(rawDescriptor)
- .replaceAll("\\\\", "\\\\\\\\") + "\n");
- } catch (IOException e) {
- log.warn("Could not write network status "
- + "consensus entry to raw database import file. We won't "
- + "make any further attempts to write raw import files in "
- + "this execution.", e);
- this.writeRawImportFiles = false;
- }
- }
}
/**
@@ -552,26 +472,6 @@ public final class RelayDescriptorDatabaseImporter {
this.importIntoDatabase = false;
}
}
- if (this.writeRawImportFiles) {
- try {
- if (this.bwhistOut == null) {
- new File(rawFilesDirectory).mkdirs();
- this.bwhistOut = new BufferedWriter(new FileWriter(
- rawFilesDirectory + "/bwhist.sql"));
- }
- this.bwhistOut.write("SELECT insert_bwhist('" + fingerprint
- + "','" + lastDate + "','" + readIntArray.toString()
- + "','" + writtenIntArray.toString() + "','"
- + dirreadIntArray.toString() + "','"
- + dirwrittenIntArray.toString() + "');\n");
- } catch (IOException e) {
- log.warn("Could not write bandwidth "
- + "history to raw database import file. We won't make "
- + "any further attempts to write raw import files in "
- + "this execution.", e);
- this.writeRawImportFiles = false;
- }
- }
readArray = writtenArray = dirreadArray = dirwrittenArray = null;
}
if (historyLine.equals("EOL")) {
@@ -628,9 +528,7 @@ public final class RelayDescriptorDatabaseImporter {
reader.setMaxDescriptorsInQueue(10);
File historyFile = new File(statsDirectory,
"database-importer-relay-descriptor-history");
- if (keepImportHistory) {
- reader.setHistoryFile(historyFile);
- }
+ reader.setHistoryFile(historyFile);
for (Descriptor descriptor : reader.readDescriptors(
this.archivesDirectories.toArray(
new File[this.archivesDirectories.size()]))) {
@@ -641,9 +539,7 @@ public final class RelayDescriptorDatabaseImporter {
this.addExtraInfoDescriptor((ExtraInfoDescriptor) descriptor);
}
}
- if (keepImportHistory) {
- reader.saveHistoryFile(historyFile);
- }
+ reader.saveHistoryFile(historyFile);
}
log.info("Finished importing relay descriptors.");
@@ -728,20 +624,6 @@ public final class RelayDescriptorDatabaseImporter {
log.warn("Could not close database connection.", e);
}
}
-
- /* Close raw import files. */
- try {
- if (this.statusentryOut != null) {
- this.statusentryOut.write("\\.\n");
- this.statusentryOut.close();
- }
- if (this.bwhistOut != null) {
- this.bwhistOut.write("\\.\n");
- this.bwhistOut.close();
- }
- } catch (IOException e) {
- log.warn("Could not close one or more raw database import files.", e);
- }
}
}
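The removed raw-import path serialized each status entry as a PostgreSQL COPY row: consensus flags became tab-separated boolean literals ("t"/"f") in the column order fixed by the COPY header, and absent values became "\N". A small sketch of that flag-serialization step in isolation (helper names are hypothetical):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

/** Hypothetical helper isolating the flag-to-COPY-column logic from the
 * removed addStatusEntry() raw-file branch. */
class CopyFormat {

  /** Serializes the given consensus flags as PostgreSQL boolean literals,
   * one tab-separated "t"/"f" column per entry in columnOrder. */
  static String flagColumns(Set<String> flags, List<String> columnOrder) {
    return columnOrder.stream()
        .map(flag -> flags.contains(flag) ? "t" : "f")
        .collect(Collectors.joining("\t"));
  }
}
```

For example, flags {Fast, Running} against the column order [Fast, Guard, Running] serialize to `t<TAB>f<TAB>t`, matching the `isfast`, `isguard`, `isrunning` columns of the removed COPY statement.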
diff --git a/src/main/resources/legacy.config.template b/src/main/resources/legacy.config.template
index 5475c1e..e2e0dac 100644
--- a/src/main/resources/legacy.config.template
+++ b/src/main/resources/legacy.config.template
@@ -1,28 +1,8 @@
-## Import directory archives from disk, if available
-#ImportDirectoryArchives 0
-#
## Relative paths to directories to import directory archives from
#DirectoryArchivesDirectory /srv/metrics.torproject.org/metrics/shared/in/recent/relay-descriptors/consensuses/
#DirectoryArchivesDirectory /srv/metrics.torproject.org/metrics/shared/in/recent/relay-descriptors/server-descriptors/
#DirectoryArchivesDirectory /srv/metrics.torproject.org/metrics/shared/in/recent/relay-descriptors/extra-infos/
#
-## Keep a history of imported directory archive files to know which files
-## have been imported before. This history can be useful when importing
-## from a changing source to avoid importing descriptors over and over
-## again, but it can be confusing to users who don't know about it.
-#KeepDirectoryArchiveImportHistory 0
-#
-## Write relay descriptors to the database
-#WriteRelayDescriptorDatabase 0
-#
## JDBC string for relay descriptor database
#RelayDescriptorDatabaseJDBC jdbc:postgresql://localhost/tordir?user=metrics&password=password
#
-## Write relay descriptors to raw text files for importing them into the
-## database using PostgreSQL's \copy command
-#WriteRelayDescriptorsRawFiles 0
-#
-## Relative path to directory to write raw text files; note that existing
-## files will be overwritten!
-#RelayDescriptorRawFilesDirectory pg-import/
-#

22 Nov '18
commit c95125a2b329c801db84eea2bdbc4e5118bde9b2
Author: Karsten Loesing <karsten.loesing(a)gmx.net>
Date: Wed Nov 14 10:48:56 2018 +0100
Rename ipv6servers module to servers.
Part of #28116.
---
build.xml | 12 ++++++------
.../metrics/stats/ipv6servers/Configuration.java | 18 ------------------
.../metrics/stats/servers/Configuration.java | 18 ++++++++++++++++++
.../stats/{ipv6servers => servers}/Database.java | 2 +-
.../{ipv6servers => servers}/Ipv6NetworkStatus.java | 2 +-
.../{ipv6servers => servers}/Ipv6ServerDescriptor.java | 2 +-
.../metrics/stats/{ipv6servers => servers}/Main.java | 8 ++++----
.../metrics/stats/{ipv6servers => servers}/Parser.java | 2 +-
.../metrics/stats/{ipv6servers => servers}/Writer.java | 2 +-
.../Ipv6NetworkStatusTest.java | 6 +++---
.../Ipv6ServerDescriptorTest.java | 14 +++++++-------
.../000a7fe20a17bf5d9839a126b1dff43f998aac6f | 0
.../0018ab4f2f28af683d52f06407edbf7ce1bd3b7d | 0
.../0041dbf9fe846f9765882f7dc8332f94b709e35a | 0
.../01003df74972ce952ebfa390f468ef63c50efa25 | 0
.../018c1229d5f56eebfc1d709d4692673d098800e8 | 0
.../2017-12-04-20-00-00-consensus.part | 0
...90507-1D8F3A91C37C5D1C4C19B1AD1D0CFBE8BF72D8E1.part | 0
.../64dd486d89af14027c9a7b4347a94b74dddb5cdb | 0
.../sql/{ipv6servers => servers}/test-ipv6servers.sql | 0
20 files changed, 43 insertions(+), 43 deletions(-)
diff --git a/build.xml b/build.xml
index a391416..89c8b31 100644
--- a/build.xml
+++ b/build.xml
@@ -105,7 +105,7 @@
depends="init"
description="Run all available database pgTAP tests." >
<antcall target="test-db">
- <param name="db2test" value="ipv6servers" />
+ <param name="db2test" value="servers" />
</antcall>
<antcall target="test-db">
<param name="db2test" value="userstats" />
@@ -319,7 +319,7 @@
<antcall target="advbwdist" />
<antcall target="hidserv" />
<antcall target="clients" />
- <antcall target="ipv6servers" />
+ <antcall target="servers" />
<antcall target="webstats" />
<antcall target="totalcw" />
<antcall target="make-data-available" />
@@ -416,11 +416,11 @@
</antcall>
</target>
- <target name="ipv6servers" >
- <property name="module.name" value="ipv6servers" />
+ <target name="servers" >
+ <property name="module.name" value="servers" />
<antcall target="run-java" >
<param name="module.main"
- value="org.torproject.metrics.stats.ipv6servers.Main" />
+ value="org.torproject.metrics.stats.servers.Main" />
</antcall>
</target>
@@ -477,7 +477,7 @@
<fileset dir="${modulebase}/hidserv/stats" includes="hidserv.csv" />
<fileset dir="${modulebase}/clients/stats"
includes="clients*.csv userstats-combined.csv" />
- <fileset dir="${modulebase}/ipv6servers/stats" includes="*.csv" />
+ <fileset dir="${modulebase}/servers/stats" includes="*.csv" />
<fileset dir="${modulebase}/webstats/stats" includes="webstats.csv" />
<fileset dir="${modulebase}/totalcw/stats" includes="totalcw.csv" />
</copy>
diff --git a/src/main/java/org/torproject/metrics/stats/ipv6servers/Configuration.java b/src/main/java/org/torproject/metrics/stats/ipv6servers/Configuration.java
deleted file mode 100644
index d849cb6..0000000
--- a/src/main/java/org/torproject/metrics/stats/ipv6servers/Configuration.java
+++ /dev/null
@@ -1,18 +0,0 @@
-/* Copyright 2017--2018 The Tor Project
- * See LICENSE for licensing information */
-
-package org.torproject.metrics.stats.ipv6servers;
-
-/** Configuration options parsed from Java properties with reasonable hard-coded
- * defaults. */
-class Configuration {
- static String descriptors = System.getProperty("ipv6servers.descriptors",
- "../../shared/in/");
- static String database = System.getProperty("ipv6servers.database",
- "jdbc:postgresql:ipv6servers");
- static String history = System.getProperty("ipv6servers.history",
- "status/read-descriptors");
- static String output = System.getProperty("ipv6servers.output",
- "stats/");
-}
-
diff --git a/src/main/java/org/torproject/metrics/stats/servers/Configuration.java b/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
new file mode 100644
index 0000000..1b7c217
--- /dev/null
+++ b/src/main/java/org/torproject/metrics/stats/servers/Configuration.java
@@ -0,0 +1,18 @@
+/* Copyright 2017--2018 The Tor Project
+ * See LICENSE for licensing information */
+
+package org.torproject.metrics.stats.servers;
+
+/** Configuration options parsed from Java properties with reasonable hard-coded
+ * defaults. */
+class Configuration {
+ static String descriptors = System.getProperty("servers.descriptors",
+ "../../shared/in/");
+ static String database = System.getProperty("servers.database",
+ "jdbc:postgresql:ipv6servers");
+ static String history = System.getProperty("servers.history",
+ "status/read-descriptors");
+ static String output = System.getProperty("servers.output",
+ "stats/");
+}
+
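The new Configuration class reads plain Java system properties with hard-coded fallbacks, so after the rename the `servers.*` keys replace the old `ipv6servers.*` ones. A run could override them on the command line like this (the jar name is hypothetical; the property keys and defaults come from the class above):

```shell
# Override the descriptor source and database for one run of the
# renamed servers module; unset properties keep their defaults.
java -Dservers.descriptors=../../shared/in/ \
     -Dservers.database=jdbc:postgresql:ipv6servers \
     -cp metrics.jar org.torproject.metrics.stats.servers.Main
```

Note that the database default stays `jdbc:postgresql:ipv6servers` even though the module is now called servers, since only the Java package was renamed, not the database.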
diff --git a/src/main/java/org/torproject/metrics/stats/ipv6servers/Database.java b/src/main/java/org/torproject/metrics/stats/servers/Database.java
similarity index 99%
rename from src/main/java/org/torproject/metrics/stats/ipv6servers/Database.java
rename to src/main/java/org/torproject/metrics/stats/servers/Database.java
index b5efe3e..9c9bda3 100644
--- a/src/main/java/org/torproject/metrics/stats/ipv6servers/Database.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/Database.java
@@ -1,7 +1,7 @@
/* Copyright 2017--2018 The Tor Project
* See LICENSE for licensing information */
-package org.torproject.metrics.stats.ipv6servers;
+package org.torproject.metrics.stats.servers;
import java.sql.Connection;
import java.sql.DriverManager;
diff --git a/src/main/java/org/torproject/metrics/stats/ipv6servers/Ipv6NetworkStatus.java b/src/main/java/org/torproject/metrics/stats/servers/Ipv6NetworkStatus.java
similarity index 98%
rename from src/main/java/org/torproject/metrics/stats/ipv6servers/Ipv6NetworkStatus.java
rename to src/main/java/org/torproject/metrics/stats/servers/Ipv6NetworkStatus.java
index 526adc7..2f40854 100644
--- a/src/main/java/org/torproject/metrics/stats/ipv6servers/Ipv6NetworkStatus.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/Ipv6NetworkStatus.java
@@ -1,7 +1,7 @@
/* Copyright 2017--2018 The Tor Project
* See LICENSE for licensing information */
-package org.torproject.metrics.stats.ipv6servers;
+package org.torproject.metrics.stats.servers;
import java.time.LocalDateTime;
import java.util.ArrayList;
diff --git a/src/main/java/org/torproject/metrics/stats/ipv6servers/Ipv6ServerDescriptor.java b/src/main/java/org/torproject/metrics/stats/servers/Ipv6ServerDescriptor.java
similarity index 95%
rename from src/main/java/org/torproject/metrics/stats/ipv6servers/Ipv6ServerDescriptor.java
rename to src/main/java/org/torproject/metrics/stats/servers/Ipv6ServerDescriptor.java
index 387024b..4450a3c 100644
--- a/src/main/java/org/torproject/metrics/stats/ipv6servers/Ipv6ServerDescriptor.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/Ipv6ServerDescriptor.java
@@ -1,7 +1,7 @@
/* Copyright 2017--2018 The Tor Project
* See LICENSE for licensing information */
-package org.torproject.metrics.stats.ipv6servers;
+package org.torproject.metrics.stats.servers;
/** Data object holding all relevant parts parsed from a (relay or bridge)
* server descriptor. */
diff --git a/src/main/java/org/torproject/metrics/stats/ipv6servers/Main.java b/src/main/java/org/torproject/metrics/stats/servers/Main.java
similarity index 95%
rename from src/main/java/org/torproject/metrics/stats/ipv6servers/Main.java
rename to src/main/java/org/torproject/metrics/stats/servers/Main.java
index d322a2e..54f44d0 100644
--- a/src/main/java/org/torproject/metrics/stats/ipv6servers/Main.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/Main.java
@@ -1,7 +1,7 @@
/* Copyright 2017--2018 The Tor Project
* See LICENSE for licensing information */
-package org.torproject.metrics.stats.ipv6servers;
+package org.torproject.metrics.stats.servers;
import org.torproject.descriptor.BridgeNetworkStatus;
import org.torproject.descriptor.Descriptor;
@@ -18,7 +18,7 @@ import java.nio.file.Paths;
import java.sql.SQLException;
import java.util.Arrays;
-/** Main class of the ipv6servers module that imports relevant parts from server
+/** Main class of the servers module that imports relevant parts from server
* descriptors and network statuses into a database, and exports aggregate
* statistics to CSV files. */
public class Main {
@@ -38,7 +38,7 @@ public class Main {
/** Run the module. */
public static void main(String[] args) throws Exception {
- log.info("Starting ipv6servers module.");
+ log.info("Starting servers module.");
log.info("Reading descriptors and inserting relevant parts into the "
+ "database.");
@@ -99,7 +99,7 @@ public class Main {
new Writer().write(Paths.get(Configuration.output, "platforms.csv"),
database.queryPlatforms());
- log.info("Terminating ipv6servers module.");
+ log.info("Terminating servers module.");
} catch (SQLException sqle) {
log.error("Cannot recover from SQL exception while querying. Not writing "
+ "output file.", sqle);
diff --git a/src/main/java/org/torproject/metrics/stats/ipv6servers/Parser.java b/src/main/java/org/torproject/metrics/stats/servers/Parser.java
similarity index 99%
rename from src/main/java/org/torproject/metrics/stats/ipv6servers/Parser.java
rename to src/main/java/org/torproject/metrics/stats/servers/Parser.java
index 9d8b71a..65055f9 100644
--- a/src/main/java/org/torproject/metrics/stats/ipv6servers/Parser.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/Parser.java
@@ -1,7 +1,7 @@
/* Copyright 2017--2018 The Tor Project
* See LICENSE for licensing information */
-package org.torproject.metrics.stats.ipv6servers;
+package org.torproject.metrics.stats.servers;
import org.torproject.descriptor.BridgeNetworkStatus;
import org.torproject.descriptor.NetworkStatusEntry;
diff --git a/src/main/java/org/torproject/metrics/stats/ipv6servers/Writer.java b/src/main/java/org/torproject/metrics/stats/servers/Writer.java
similarity index 96%
rename from src/main/java/org/torproject/metrics/stats/ipv6servers/Writer.java
rename to src/main/java/org/torproject/metrics/stats/servers/Writer.java
index 4e2893c..5ff1858 100644
--- a/src/main/java/org/torproject/metrics/stats/ipv6servers/Writer.java
+++ b/src/main/java/org/torproject/metrics/stats/servers/Writer.java
@@ -1,7 +1,7 @@
/* Copyright 2017--2018 The Tor Project
* See LICENSE for licensing information */
-package org.torproject.metrics.stats.ipv6servers;
+package org.torproject.metrics.stats.servers;
import java.io.File;
import java.io.IOException;
diff --git a/src/test/java/org/torproject/metrics/stats/ipv6servers/Ipv6NetworkStatusTest.java b/src/test/java/org/torproject/metrics/stats/servers/Ipv6NetworkStatusTest.java
similarity index 96%
rename from src/test/java/org/torproject/metrics/stats/ipv6servers/Ipv6NetworkStatusTest.java
rename to src/test/java/org/torproject/metrics/stats/servers/Ipv6NetworkStatusTest.java
index 2f3ca42..4ad0b9c 100644
--- a/src/test/java/org/torproject/metrics/stats/ipv6servers/Ipv6NetworkStatusTest.java
+++ b/src/test/java/org/torproject/metrics/stats/servers/Ipv6NetworkStatusTest.java
@@ -1,7 +1,7 @@
/* Copyright 2017--2018 The Tor Project
* See LICENSE for licensing information */
-package org.torproject.metrics.stats.ipv6servers;
+package org.torproject.metrics.stats.servers;
import static junit.framework.TestCase.assertEquals;
import static junit.framework.TestCase.fail;
@@ -33,8 +33,8 @@ public class Ipv6NetworkStatusTest {
/** Provide test data. */
@Parameters
public static Collection<Object[]> data() {
- String relayFileName = "ipv6servers/2017-12-04-20-00-00-consensus.part";
- String bridgeFileName = "ipv6servers/"
+ String relayFileName = "servers/2017-12-04-20-00-00-consensus.part";
+ String bridgeFileName = "servers/"
+ "20171204-190507-1D8F3A91C37C5D1C4C19B1AD1D0CFBE8BF72D8E1.part";
return Arrays.asList(new Object[][] {
{ "Relay status without Guard or Exit flag and without IPv6 address. ",
diff --git a/src/test/java/org/torproject/metrics/stats/ipv6servers/Ipv6ServerDescriptorTest.java b/src/test/java/org/torproject/metrics/stats/servers/Ipv6ServerDescriptorTest.java
similarity index 86%
rename from src/test/java/org/torproject/metrics/stats/ipv6servers/Ipv6ServerDescriptorTest.java
rename to src/test/java/org/torproject/metrics/stats/servers/Ipv6ServerDescriptorTest.java
index 7b63c1e..54a6941 100644
--- a/src/test/java/org/torproject/metrics/stats/ipv6servers/Ipv6ServerDescriptorTest.java
+++ b/src/test/java/org/torproject/metrics/stats/servers/Ipv6ServerDescriptorTest.java
@@ -1,7 +1,7 @@
/* Copyright 2017--2018 The Tor Project
* See LICENSE for licensing information */
-package org.torproject.metrics.stats.ipv6servers;
+package org.torproject.metrics.stats.servers;
import static junit.framework.TestCase.assertEquals;
import static org.junit.Assert.assertNotNull;
@@ -32,22 +32,22 @@ public class Ipv6ServerDescriptorTest {
public static Collection<Object[]> data() {
return Arrays.asList(new Object[][] {
{ "Relay server descriptor without or-address or ipv6-policy line.",
- "ipv6servers/0018ab4f2f28af683d52f06407edbf7ce1bd3b7d",
+ "servers/0018ab4f2f28af683d52f06407edbf7ce1bd3b7d",
819200, false, false },
{ "Relay server descriptor with or-address and ipv6-policy line.",
- "ipv6servers/01003df74972ce952ebfa390f468ef63c50efa25",
+ "servers/01003df74972ce952ebfa390f468ef63c50efa25",
6576128, true, true },
{ "Relay server descriptor with or-address line only.",
- "ipv6servers/018c1229d5f56eebfc1d709d4692673d098800e8",
+ "servers/018c1229d5f56eebfc1d709d4692673d098800e8",
0, true, false },
{ "Bridge server descriptor without or-address or ipv6-policy line.",
- "ipv6servers/000a7fe20a17bf5d9839a126b1dff43f998aac6f",
+ "servers/000a7fe20a17bf5d9839a126b1dff43f998aac6f",
0, false, false },
{ "Bridge server descriptor with or-address line.",
- "ipv6servers/0041dbf9fe846f9765882f7dc8332f94b709e35a",
+ "servers/0041dbf9fe846f9765882f7dc8332f94b709e35a",
0, true, false },
{ "Bridge server descriptor with (ignored) ipv6-policy accept line.",
- "ipv6servers/64dd486d89af14027c9a7b4347a94b74dddb5cdb",
+ "servers/64dd486d89af14027c9a7b4347a94b74dddb5cdb",
0, false, false }
});
}
diff --git a/src/test/resources/ipv6servers/000a7fe20a17bf5d9839a126b1dff43f998aac6f b/src/test/resources/servers/000a7fe20a17bf5d9839a126b1dff43f998aac6f
similarity index 100%
rename from src/test/resources/ipv6servers/000a7fe20a17bf5d9839a126b1dff43f998aac6f
rename to src/test/resources/servers/000a7fe20a17bf5d9839a126b1dff43f998aac6f
diff --git a/src/test/resources/ipv6servers/0018ab4f2f28af683d52f06407edbf7ce1bd3b7d b/src/test/resources/servers/0018ab4f2f28af683d52f06407edbf7ce1bd3b7d
similarity index 100%
rename from src/test/resources/ipv6servers/0018ab4f2f28af683d52f06407edbf7ce1bd3b7d
rename to src/test/resources/servers/0018ab4f2f28af683d52f06407edbf7ce1bd3b7d
diff --git a/src/test/resources/ipv6servers/0041dbf9fe846f9765882f7dc8332f94b709e35a b/src/test/resources/servers/0041dbf9fe846f9765882f7dc8332f94b709e35a
similarity index 100%
rename from src/test/resources/ipv6servers/0041dbf9fe846f9765882f7dc8332f94b709e35a
rename to src/test/resources/servers/0041dbf9fe846f9765882f7dc8332f94b709e35a
diff --git a/src/test/resources/ipv6servers/01003df74972ce952ebfa390f468ef63c50efa25 b/src/test/resources/servers/01003df74972ce952ebfa390f468ef63c50efa25
similarity index 100%
rename from src/test/resources/ipv6servers/01003df74972ce952ebfa390f468ef63c50efa25
rename to src/test/resources/servers/01003df74972ce952ebfa390f468ef63c50efa25
diff --git a/src/test/resources/ipv6servers/018c1229d5f56eebfc1d709d4692673d098800e8 b/src/test/resources/servers/018c1229d5f56eebfc1d709d4692673d098800e8
similarity index 100%
rename from src/test/resources/ipv6servers/018c1229d5f56eebfc1d709d4692673d098800e8
rename to src/test/resources/servers/018c1229d5f56eebfc1d709d4692673d098800e8
diff --git a/src/test/resources/ipv6servers/2017-12-04-20-00-00-consensus.part b/src/test/resources/servers/2017-12-04-20-00-00-consensus.part
similarity index 100%
rename from src/test/resources/ipv6servers/2017-12-04-20-00-00-consensus.part
rename to src/test/resources/servers/2017-12-04-20-00-00-consensus.part
diff --git a/src/test/resources/ipv6servers/20171204-190507-1D8F3A91C37C5D1C4C19B1AD1D0CFBE8BF72D8E1.part b/src/test/resources/servers/20171204-190507-1D8F3A91C37C5D1C4C19B1AD1D0CFBE8BF72D8E1.part
similarity index 100%
rename from src/test/resources/ipv6servers/20171204-190507-1D8F3A91C37C5D1C4C19B1AD1D0CFBE8BF72D8E1.part
rename to src/test/resources/servers/20171204-190507-1D8F3A91C37C5D1C4C19B1AD1D0CFBE8BF72D8E1.part
diff --git a/src/test/resources/ipv6servers/64dd486d89af14027c9a7b4347a94b74dddb5cdb b/src/test/resources/servers/64dd486d89af14027c9a7b4347a94b74dddb5cdb
similarity index 100%
rename from src/test/resources/ipv6servers/64dd486d89af14027c9a7b4347a94b74dddb5cdb
rename to src/test/resources/servers/64dd486d89af14027c9a7b4347a94b74dddb5cdb
diff --git a/src/test/sql/ipv6servers/test-ipv6servers.sql b/src/test/sql/servers/test-ipv6servers.sql
similarity index 100%
rename from src/test/sql/ipv6servers/test-ipv6servers.sql
rename to src/test/sql/servers/test-ipv6servers.sql
[translation/whisperback] Update translations for whisperback
by translation@torproject.org 22 Nov '18
commit ddba33a287a8881159e155b9168d061ca45fb636
Author: Translation commit bot <translation(a)torproject.org>
Date: Thu Nov 22 05:49:00 2018 +0000
Update translations for whisperback
---
sv/sv.po | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/sv/sv.po b/sv/sv.po
index c34bd0829..878889ae8 100644
--- a/sv/sv.po
+++ b/sv/sv.po
@@ -14,7 +14,7 @@ msgstr ""
"Project-Id-Version: Tor Project\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2018-06-11 17:17+0200\n"
-"PO-Revision-Date: 2018-11-21 16:28+0000\n"
+"PO-Revision-Date: 2018-11-22 05:26+0000\n"
"Last-Translator: Jonatan Nyberg\n"
"Language-Team: Swedish (http://www.transifex.com/otf/torproject/language/sv/)\n"
"MIME-Version: 1.0\n"
@@ -27,7 +27,7 @@ msgstr ""
#: ../whisperBack/whisperback.py:56
#, python-format
msgid "Invalid contact email: %s"
-msgstr "Ogiltig e-postadress: %s"
+msgstr "Ogiltig kontakt e-post: %s"
#: ../whisperBack/whisperback.py:74
#, python-format
@@ -36,7 +36,7 @@ msgstr "Ogiltig kontakt OpenPGP-nyckel: %s"
#: ../whisperBack/whisperback.py:76
msgid "Invalid contact OpenPGP public key block"
-msgstr "Ogiltig OpenPGP publikt nyckel block"
+msgstr "Ogiltig OpenPGP publikt nyckelblock"
#: ../whisperBack/exceptions.py:41
#, python-format
@@ -63,7 +63,7 @@ msgstr "Önskat resultat"
#: ../whisperBack/gui.py:130
msgid "Unable to load a valid configuration."
-msgstr "Kunde inte läsa in en korrekt konfiguration."
+msgstr "Kunde inte läsa in en giltig konfiguration."
#: ../whisperBack/gui.py:166
msgid "Sending mail..."
@@ -84,15 +84,15 @@ msgstr "Kontaktens e-postadress verkar inte giltig."
#: ../whisperBack/gui.py:202
msgid "Unable to send the mail: SMTP error."
-msgstr "Kunde inte skicka e-post: SMTP fel."
+msgstr "Det gick inte att skicka e-post: SMTP-fel."
#: ../whisperBack/gui.py:204
msgid "Unable to connect to the server."
-msgstr "Kunde inte koppla upp till servern."
+msgstr "Det gick inte att ansluta till servern."
#: ../whisperBack/gui.py:206
msgid "Unable to create or to send the mail."
-msgstr "Kunde inte skapa eller skicka e-postmeddelandet."
+msgstr "Det går inte att skapa eller skicka e-post."
#: ../whisperBack/gui.py:209
msgid ""
22 Nov '18
commit a885242c64eb82c2029229ea83fd591e52d8e864
Author: Translation commit bot <translation(a)torproject.org>
Date: Thu Nov 22 05:48:36 2018 +0000
Update translations for torcheck
---
sv/torcheck.po | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/sv/torcheck.po b/sv/torcheck.po
index 3755ab3b9..fe4d3b16d 100644
--- a/sv/torcheck.po
+++ b/sv/torcheck.po
@@ -17,7 +17,7 @@ msgid ""
msgstr ""
"Project-Id-Version: Tor Project\n"
"POT-Creation-Date: 2012-02-16 20:28+PDT\n"
-"PO-Revision-Date: 2018-11-21 16:34+0000\n"
+"PO-Revision-Date: 2018-11-22 05:23+0000\n"
"Last-Translator: Jonatan Nyberg\n"
"Language-Team: Swedish (http://www.transifex.com/otf/torproject/language/sv/)\n"
"MIME-Version: 1.0\n"
@@ -34,7 +34,7 @@ msgid ""
"Please refer to the <a href=\"https://www.torproject.org/\">Tor website</a> "
"for further information about using Tor safely. You are now free to browse "
"the Internet anonymously."
-msgstr "Besök <a href=\"https://www.torproject.org/\">Tor-webbplatsen</a> för mer information om hur du använder Tor säkert. Du kan nu surfa på nätet anonymt."
+msgstr "Vänligen besök <a href=\"https://www.torproject.org/\">Tor-webbplatsen</a> för mer information om hur du använder Tor på ett säkert sätt. Du kan nu surfa på nätet anonymt."
msgid "There is a security update available for Tor Browser."
msgstr "Det finns en tillgänglig säkerhets uppdatering för Tor Browser."
@@ -42,7 +42,7 @@ msgstr "Det finns en tillgänglig säkerhets uppdatering för Tor Browser."
msgid ""
"<a href=\"https://www.torproject.org/download/download-easy.html\">Click "
"here to go to the download page</a>"
-msgstr "<a href=\"https://www.torproject.org/download/download-easy.html\">Klicka här för att gå till nedladdningssidan</a>"
+msgstr "<a href=\"https://www.torproject.org/download/download-easy.html\">Klicka här för att gå till hämtningssidan</a>"
msgid "Sorry. You are not using Tor."
msgstr "Tyvärr. Du använder inte Tor."
@@ -60,7 +60,7 @@ msgstr "Tyvärr, antingen misslyckades din förfrågan eller så inkom ett ovän
msgid ""
"A temporary service outage prevents us from determining if your source IP "
"address is a <a href=\"https://www.torproject.org/\">Tor</a> node."
-msgstr "Ett temporärt fel på tjänsten förhindrar oss från att avgöra om den IP-adress din trafik kommer ifrån är en <a href=\"https://www.torproject.org/\">Tor</a>-nod."
+msgstr "En tillfällig serviceavbrott förhindrar oss från att avgöra om den IP-adress din trafik kommer ifrån är en <a href=\"https://www.torproject.org/\">Tor</a>-nod."
msgid "Your IP address appears to be: "
msgstr "Din IP-adress ser ut att vara:"
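The hunks above follow the gettext PO format: each entry is a `msgid` (source string) followed by a `msgstr` (translation), and long strings are split across adjacent quoted continuation lines, as in the multi-line `msgid ""` entries. A minimal sketch of reading such pairs back — it ignores comments, plural forms, and most escapes, which real tooling (`msgfmt`, polib) handles in full:

```python
def parse_po_pairs(text):
    """Extract msgid -> msgstr pairs from a PO snippet.

    Minimal sketch only: joins adjacent quoted continuation lines,
    skips comments/plurals, and unescapes just \\" and \\n.
    """
    def unquote(s):
        # strip the surrounding quotes, then unescape
        return s[1:-1].replace('\\"', '"').replace("\\n", "\n")

    entries = {}
    mode = None                 # which buffer continuation lines extend
    buf = {"id": "", "str": ""}
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("msgid "):
            if mode == "str":   # previous entry is complete; store it
                entries[buf["id"]] = buf["str"]
            buf = {"id": unquote(line[6:]), "str": ""}
            mode = "id"
        elif line.startswith("msgstr "):
            buf["str"] = unquote(line[7:])
            mode = "str"
        elif line.startswith('"') and mode:
            buf[mode if mode == "id" else "str"] += unquote(line)
    if mode == "str":           # flush the final entry
        entries[buf["id"]] = buf["str"]
    return entries
```

With this, the entry `msgid "Sorry. You are not using Tor."` from the hunk above maps to its Swedish `msgstr`, and the PO header (the `msgid ""` entry) maps to the `Project-Id-Version`/`PO-Revision-Date` block as one string.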

[translation/tbmanual-contentspot] Update translations for tbmanual-contentspot
by translation@torproject.org 21 Nov '18
commit 02f000790892de6d75f1d3e5b2ecbc75c014d96a
Author: Translation commit bot <translation(a)torproject.org>
Date: Wed Nov 21 23:46:54 2018 +0000
Update translations for tbmanual-contentspot
---
contents+el.po | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/contents+el.po b/contents+el.po
index 931da6e07..6381e167e 100644
--- a/contents+el.po
+++ b/contents+el.po
@@ -86,6 +86,10 @@ msgid ""
" valid for a single session (until Tor Browser is exited or a <a href"
"=\"/managing-identities/#new-identity\">New Identity</a> is requested)."
msgstr ""
+"Από προεπιλογή, ο Tor Browser δεν κρατά κανένα ιστορικό περιήγησης. Τα "
+"cookies είναι έγκυρα μόνο για μια συνεδρία (μέχρι να κλείσει ο Tor Browser ή"
+" να ζητηθεί μια <a href=\"/managing-identities/#new-identity\">Νέα "
+"Ταυτότητα</a>)."
#: https//tb-manual.torproject.org/en-US/about/
#: (content/about/contents+en-US.lrtopic.body)
@@ -174,7 +178,7 @@ msgstr ""
#: https//tb-manual.torproject.org/en-US/downloading/
#: (content/downloading/contents+en-US.lrtopic.body)
msgid "##### Mirrors"
-msgstr ""
+msgstr "##### Mirrors"
#: https//tb-manual.torproject.org/en-US/downloading/
#: (content/downloading/contents+en-US.lrtopic.body)
@@ -184,6 +188,10 @@ msgid ""
"mirrors, either through [EFF](https://tor.eff.org) or [Calyx "
"Institute](https://tor.calyxinstitute.org)."
msgstr ""
+"Αν δεν μπορείτε να κατεβάσετε το Tor Browser από τον επίσημο ιστότοπο του "
+"Tor Project, μπορείτε να δοκιμάσετε να το κατεβάσετε από έναν από τα επίσημα"
+" mirror μας είτε μέσω του [EFF] (https://tor.eff.org) ή του [Calyx "
+"Institute] (https://tor.calyxinstitute.org)"
#: https//tb-manual.torproject.org/en-US/downloading/
#: (content/downloading/contents+en-US.lrtopic.body)
@@ -393,7 +401,7 @@ msgstr "<img class=\"col-md-6\" src=\"../../static/images/proxy.png\">"
#: https//tb-manual.torproject.org/en-US/running-tor-browser/
#: (content/running-tor-browser/contents+en-US.lrtopic.seo_slug)
msgid "running-tor-browser"
-msgstr ""
+msgstr "running-tor-browser"
#: https//tb-manual.torproject.org/en-US/bridges/
#: (content/bridges/contents+en-US.lrtopic.title)
@@ -504,6 +512,8 @@ msgid ""
"<img class=\"col-md-6\" src=\"../../static/images/tor-launcher-custom-"
"bridges.png\">"
msgstr ""
+"<img class=\"col-md-6\" src=\"../../static/images/tor-launcher-custom-"
+"bridges.png\">"
#: https//tb-manual.torproject.org/en-US/bridges/
#: (content/bridges/contents+en-US.lrtopic.body)
@@ -1836,7 +1846,7 @@ msgstr ""
#: https//tb-manual.torproject.org/en-US/plugins/
#: (content/plugins/contents+en-US.lrtopic.seo_slug)
msgid "plugins"
-msgstr ""
+msgstr "πρόσθετα"
#: https//tb-manual.torproject.org/en-US/troubleshooting/
#: (content/troubleshooting/contents+en-US.lrtopic.title)
@@ -1956,7 +1966,7 @@ msgstr ""
#: https//tb-manual.torproject.org/en-US/troubleshooting/
#: (content/troubleshooting/contents+en-US.lrtopic.seo_slug)
msgid "troubleshooting"
-msgstr ""
+msgstr "αντιμετώπιση προβλημάτων"
#: https//tb-manual.torproject.org/en-US/known-issues/
#: (content/known-issues/contents+en-US.lrtopic.title)
@@ -2068,7 +2078,7 @@ msgstr ""
#: https//tb-manual.torproject.org/en-US/uninstalling/
#: (content/uninstalling/contents+en-US.lrtopic.seo_slug)
msgid "uninstalling"
-msgstr ""
+msgstr "απεγκατάσταση"
#: https//tb-manual.torproject.org/en-US/uninstalling/
#: (content/uninstalling/contents+en-US.lrtopic.title)
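Several of the Greek hunks above replace an empty `msgstr ""` with a translation; an empty `msgstr` is what marks a PO entry as untranslated. A rough sketch of counting such entries, assuming blank-line-separated entries as in the diffs (`msgfmt --statistics` is the authoritative tool):

```python
import re

def untranslated_count(po_text):
    """Count entries whose msgstr is empty, i.e. untranslated.

    Rough sketch: splits on blank lines and treats a msgstr with no
    quoted content (including continuation lines) as untranslated.
    """
    count = 0
    for block in re.split(r"\n\s*\n", po_text):
        lines = [l.strip() for l in block.splitlines() if l.strip()]
        if not any(l.startswith("msgid ") for l in lines):
            continue  # header comment block or empty chunk
        try:
            i = next(j for j, l in enumerate(lines)
                     if l.startswith("msgstr"))
        except StopIteration:
            continue  # malformed entry without a msgstr
        # the msgstr value plus any quoted continuation lines
        tail = [lines[i][len("msgstr "):].strip()] + lines[i + 1:]
        text = "".join(t.strip('"') for t in tail if t.startswith('"'))
        if not text:
            count += 1
    return count
```

Applied before and after a commit like the one above, the difference in this count is the number of strings the translation bot filled in.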