The objective of this workload was twofold: measuring the maximum up/down transfer speed of operations, and detecting correlations between the transfer speed and the load of an account. Intuitively, the first objective was achieved by alternating upload and download operations, since the provider only needed to handle one operation per account at a time. We achieved the second objective by acquiring information about the load of the account in each API call. The workload was executed continuously at each node as follows: first, the node created synthetic files of a size chosen at random from the aforementioned set of sizes and uploaded them until the capacity of the account was full. At this point, the node downloaded all the files in random order, deleting each file after its download.
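The two phases of this workload can be sketched as follows. This is a minimal illustration, assuming a hypothetical `client` wrapper with `free_space()`, `upload()`, `download()` and `delete()` methods; the function and file names are ours, not the authors' implementation.

```python
import os
import random

def make_synthetic_file(path, size_bytes):
    """Random content defeats any provider-side deduplication."""
    with open(path, "wb") as f:
        f.write(os.urandom(size_bytes))

def throughput_workload(client, sizes):
    """One round of the up/down transfer workload.

    `client` is a hypothetical provider wrapper; `sizes` is the set of
    file sizes (in bytes) to draw from at random.
    """
    uploaded = []
    # Phase 1: upload synthetic files until the account capacity is reached.
    while True:
        size = random.choice(sizes)
        if client.free_space() < size:
            break
        name = f"synthetic_{len(uploaded)}.bin"
        make_synthetic_file(name, size)
        client.upload(name)   # one operation per account at a time
        uploaded.append(name)
        os.remove(name)       # the local copy is no longer needed
    # Phase 2: download every file in random order, deleting each afterwards.
    random.shuffle(uploaded)
    for name in uploaded:
        client.download(name)
        client.delete(name)
```

Alternating the two phases keeps exactly one transfer in flight per account, which is what isolates the maximum per-operation speed.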
== Service Variability Workload ==

This workload maintained a nearly continuous upload and download transfer flow at every node in order to analyze the performance variability of the service over time, providing an appropriate substrate for a time-series analysis of these services. The procedure was as follows: the upload process first created one file for each defined file size and labeled it as “reserved”, meaning it was never deleted from the account. This ensured that the download process was never interrupted, since at least the reserved files were always available to be downloaded. The upload process then uploaded synthetic random files until the account was full, at which point it deleted all files except the reserved ones and continued uploading. In parallel, the download process continuously downloaded files stored in the account, chosen at random.
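The interplay of the two processes can be sketched with two threads. This is a simplified illustration under assumptions of ours: the `client` wrapper and its method names are hypothetical, synthetic file creation is elided (`upload` takes a size directly), and the authors' actual processes need not be Python threads.

```python
import random
import threading

def upload_process(client, sizes, stop):
    """Maintains a continuous upload flow.

    First uploads one 'reserved' file per defined size; these are never
    deleted, so the download process always has something to fetch.
    """
    reserved = []
    for size in sizes:
        name = f"reserved_{size}.bin"
        client.upload(name, size)
        reserved.append(name)
    i = 0
    while not stop.is_set():
        size = random.choice(sizes)
        if client.free_space() < size:
            # Account full: delete everything except the reserved files.
            for name in client.list_files():
                if name not in reserved:
                    client.delete(name)
            continue
        client.upload(f"synthetic_{i}.bin", size)
        i += 1

def download_process(client, stop):
    """Continuously downloads files chosen at random from the account."""
    while not stop.is_set():
        files = client.list_files()
        if files:
            client.download(random.choice(files))

def run_variability_workload(client, sizes, duration_s):
    stop = threading.Event()
    up = threading.Thread(target=upload_process, args=(client, sizes, stop))
    down = threading.Thread(target=download_process, args=(client, stop))
    up.start(); down.start()
    stop.wait(duration_s)  # block the main thread for the duration of the run
    stop.set()
    up.join(); down.join()
```

Because the reserved files are uploaded before anything else and never removed, the download loop can run uninterrupted even right after the periodic cleanup empties the rest of the account.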

=== Deployment ===

Finally, we executed the experiments in different ways depending on the platform. On PlanetLab, we employed the same machines in each test and therefore had to execute all combinations of workloads and providers sequentially. This minimized the impact of hardware and network heterogeneity, since all the experiments ran under the same conditions. In contrast, in our labs we executed a given workload for all providers in parallel (i.e., assigning 10 machines per provider). This provided two main advantages: the measurement process was substantially faster, and a fair comparison of the three services over the same period of time was possible.