There are a million acronyms out there to help students remember the layers of the Open Systems Interconnection (OSI) model. Why offer another? Well, this one has worked best for me.
As a reminder, the OSI Model Layers are as follows:
Layer 7: Application
Layer 6: Presentation
Layer 5: Session
Layer 4: Transport
Layer 3: Network
Layer 2: Data Link
Layer 1: Physical
AP(p)S Transport Network Data Physical(ly)
I like this because it helps me remember several important pieces of information in one tidy little phrase.
The Transport, Network, Data (link), and Physical layers are spelled right out.
Grouping Application, Presentation, and Session together into the acronym AP(p)S helps group the upper layers together, and it conveys that these layers are of primary concern for the application-level data.
The wrapping sequence starts at the top (layer 7: Application). All too often, I hear people say they are confused about the layers because layer 1 seems like it should be wrapped by the other layers. This mnemonic helps avoid that issue. Additionally, the layers read naturally from top to bottom when I see them represented, and this order makes writing them down more natural.
Finally, the phrase reminds me that these abstractions all rest on a physical communication layer (i.e., when troubleshooting, start there first!)
If this helps you, great, and if not, I hope you find a mnemonic that works well for you 🙂
Several months ago I wanted to estimate the mean of a value for users of an online app as quickly as possible with as few samples as possible. Every data point required scraping a webpage, which proved time-consuming AND costly in terms of system resources. Additionally, if I triggered too much traffic with my queries, the host would temporarily block the IP of my server.
Having theorized that the distribution was normal, I considered several more formal approaches (e.g., estimate power and then choose the appropriate sample size; sequential sampling; etc.) However, I was curious whether I could develop an iterative approach that would both satisfy my precision requirements and get me the data I needed as efficiently as possible given the cost of each web-scraping request.
Big M: A Generally Precise Estimate Of The Mean
While I didn’t need to publish the means I was estimating in scholarly journals, I wanted to ensure that the estimates were provably reliable in the general sense. That is to say, I wanted confidence intervals “in the nineties” or “in the eighties.” I wasn’t trying to disprove any formal null hypotheses, but I did want good data. I was training machine learning models for a class, and I wanted the most predictive models possible given my resource limitations.
I decided to play around with an iterative approach to growing a sample until it was large enough to achieve the general degree of precision that I desired (even the thought of this would probably make my grad school stats professors throw up in their mouths.) This type of approach is a “no-no” in statistics books, as you can generally grow a sample until you temporarily get the result you want. You usually want to make informed a priori decisions about your research and then follow them to reach robust results.
However, I’d played around with Monte Carlo simulations a couple of decades ago (I’m so old), and I always found it interesting how well various methods generally held up even in the face of violations of assumptions. Additionally, the ability of machine learning models to consistently converge on valid findings even with crude hyperparameters has taught me to put things to the test before discounting them.
I set out to make a (relatively) simple algorithm for estimating the mean of the population to a general degree of precision (e.g., “around ninety out of one hundred samples will contain the mean of the population within the given error constraint.”) I called this estimate Big M because, well, you know, D. Knuth is awesome, and Big O conveys the kind of general precision I wanted to embrace. If I’m working with big data, I don’t need a scientifically chosen set of samples that should guarantee a CI of 95%; I just need to know that I’m generally in the 90s.
The Big M Algorithm Overview
After trying out various forms of the algorithm, I developed the following approach; a quick example Jupyter notebook containing code and several random results is linked below. Essentially, the code implements the following algorithm.
1. Select an initial sample.
2. Compute the confidence interval (CI) and mean.
3. Check whether the acceptable error (e.g., off by no more than 2 inches) is outside the confidence interval.
   - If the acceptable error is outside the CI, increment the count of valid findings.
   - If the acceptable error is inside the CI, reset the count to zero.
4. Check whether the continuous count has reached its threshold.
   - If it has, exit the function with the mean estimate.
   - If it has not, add one more observation to the sample and return to step 2.
There are some other details of note. The continuous count threshold is based on the ratio of the population size to the initial sample size and on the chosen confidence percentage. Additionally, there is a maximum sample size computed automatically from the population size and the initial sample size, though this parameter can be set manually.
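To make the steps concrete, here is a minimal Python sketch of the approach, assuming a normal approximation for the confidence interval. The z critical value, the continuous-count threshold formula, and the maximum-sample-size cap are all illustrative assumptions of mine, not the exact formulations used in the notebook or the server-side implementation.

```python
import math
import random
import statistics

Z_90 = 1.645  # z critical value for a ~90% confidence interval (assumed)

def big_m(draw_observation, population_size, acceptable_error,
          initial_n=10, max_n=None):
    """Grow the sample one observation at a time until the CI half-width
    stays below the acceptable error for `threshold` consecutive checks."""
    if max_n is None:
        # Assumed cap derived from the population and initial sample sizes.
        max_n = max(initial_n * 10, population_size // 100)
    # Assumed continuous-count threshold: scales with the ratio of the
    # population size to the initial sample size.
    threshold = max(3, round(3 * math.log10(population_size / initial_n)))

    sample = [draw_observation() for _ in range(initial_n)]
    streak = 0
    while True:
        mean = statistics.fmean(sample)
        half_width = Z_90 * statistics.stdev(sample) / math.sqrt(len(sample))
        if half_width < acceptable_error:
            # Acceptable error lies outside the CI: a "valid finding."
            streak += 1
            if streak >= threshold:
                return mean, len(sample)
        else:
            streak = 0  # reset the continuous count
        if len(sample) >= max_n:
            return mean, len(sample)
        sample.append(draw_observation())  # one more (costly) observation

# Example: estimate the mean of a hypothetical normal population.
rng = random.Random(7)
estimate, n_used = big_m(lambda: rng.gauss(70.0, 4.0),
                         population_size=100_000,
                         acceptable_error=2.0)
```

Each call to `draw_observation` stands in for one expensive web-scraping request, which is why the sample grows one observation at a time rather than in batches.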
The Big M Jupyter Notebook Example Code (Embedded as HTML)
I’ve linked to a demonstration of the code to implement the algorithm. Much of this is arbitrary, and I’m sure you could refactor this so it performs better. I’ve since developed a server-side implementation that’s quite different from this in a language other than Python, but this should (barring bugs, which obviously will be present) capture the general thrust of the algorithm and show general results (you can just rerun the Notebook to see new results.)
There are several situations in which one may want to monitor network traffic on an iOS device (e.g., ensuring there is no unexpected network traffic, identifying the APIs utilized by various apps, etc.) Let’s look at one possible option to accomplish this. From iOS 5 on, we can use Remote Virtual Interface (RVI) to add a network interface to your Mac that carries the packet stream from an iOS device.
Install Xcode From The App Store
First, ensure that you’ve installed Xcode from the App Store on the Mac you’ll be using. It’s free, and it’s a straightforward install.
Install Xcode Command Line Tools
Next, make sure you have the command line tools for Xcode installed on your system. You can type the following command to check if they are installed:
$ xcode-select --version
If you don’t see any version information and you get a “command not found” type of error, you can use the following command to install the tools:
$ xcode-select --install
Of note, don’t try to use the same command to update your installation of the command line tools; just let Apple prompt you for an update (or, if you have automatic updates enabled, updates should happen without you needing to do anything.)
Connect Your iOS Device To Your Mac Computer
Then, connect your iOS device to your Mac using whatever wired connection is required (for my iPhone 8 and my iMac, I’m using a USB-to-Lightning cable.) Once connected, you just need to have both devices turned on so they can talk to each other (you may have to enter the passcode for your iOS device to unlock it.)
Start Xcode And Find Your UDID
Next, we have to locate the Unique Device Identifier (UDID) for your iOS device. The easiest way to do this (and have something you can copy into your command for the next step) is to use Xcode. After starting Xcode, you can navigate to the Window menu and then select Devices and Simulators. That will bring up a new window, then you can select the Devices tab, which should reveal detailed information about your iOS device. For our purposes, we need the value after the Identifier label (blurred out in my image below), which is the UDID for the device.
Find The “rvictl” Command On Your Mac
Now we need to open the terminal again. First, we have to find where the RVI command is located on your version of macOS. The find command can do this nicely, and we’ll enhance our command so we don’t see hundreds of permission denied messages.
$ find / -name "rvictl" 2>/dev/null
The output should reveal the location of the command. On my iMac running Catalina, the location is /Library/Apple/usr/bin, but make sure you check your system for the precise location.
Next, change to the directory of the rvictl binary and then run the command.
$ cd /Path/On/Your/System
Run The “rvictl” Command To Add Your iOS Device As A Network Interface
Finally, we can run the rvictl command and pass in the UDID we found earlier for our iOS device to start up a new network interface that will allow us to monitor the network traffic on the device using our Mac computer.
$ rvictl -s the-udid-number-of-your-ios-device
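When you’re finished monitoring, it’s worth knowing that rvictl can also tear the interface back down; the -x flag is the counterpart to -s:

```shell
# Remove the virtual interface when you're done capturing
$ rvictl -x the-udid-number-of-your-ios-device
```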
Test The Network Interface With tcpdump
Now that the network interface has been configured on your Mac for your iOS device (usually called rvi0), let’s test it to ensure that it’s working. Try using tcpdump to view HTTP activity on your iOS device and then visit a webpage on your phone that is using HTTP (not HTTPS.)
$ tcpdump -i rvi0 port http
You should now have the ability to configure your Mac computer to monitor network traffic on your iOS device. There are pros and cons to this particular approach. On the positive side, it is relatively easy if you’re using a Mac, unencrypted traffic is easily viewed, and the required applications/tools are few. However, if you don’t own a Mac computer, or if you need to view encrypted traffic (e.g., HTTPS), there are better approaches. I’ll cover other monitoring options in the future that address these issues.
You might wonder why we waited for the trademark to be finalized before investing the time to create more resources. Well, I had a bad experience where I’d worked hard to build up the presence of an app in the Apple App Store. Then, someone created an app with the same name… except that they added “HD.” Seriously, they just called their app “name-of-my-app HD.” The similarly-named “HD” product seriously undermined my brand and advertising. Lesson learned!
Charts are fantastic! And, on the rare occasion that you find an old chart that possesses the charm of a previous era whilst maintaining valuable insights for today’s learners, you’ve found a true treasure.
If you’re into scientific antiques, you have to examine the details in this 1944 poster from the W.M. Welch Scientific Company: “Chart of Electromagnetic Radiations.” It was found tucked away in the back of an unused office years ago, but now hangs framed in a high-traffic hallway populated by Lawrence Livermore engineers.
What a marvelous poster. Certainly, this was the poster for which I’d been waiting. Beyond the beautiful presentation, someone had also worked up a nice writeup of the chart’s provenance, which only added to its allure. Sure, Edward Tufte may not have installed this particular chart in his house, but even he would have to concede the impressive level of information density achieved.
Really, it looked like all systems were go. It’s licensed under Creative Commons 2.0, so getting this beautiful chart printed as a large-size poster would be a snap. All I had to do was go to FedEx Office to get some pricing and then have a quick talk with my beautiful wife. Easy peasy.
Although FedEx had some reasonable pricing, apparently the discussion with the wife posed a greater stumbling block than I had anticipated. Couldn’t she see the beauty of this poster? Why couldn’t we put this baby up on one of our walls as big as it could be printed?
After much bargaining, she agreed to let me put up a poster if I improved the appearance of the chart (it looked old, worn, and dusty to her), and we limited the largest side to 36 inches.
So, after putting in some time in Photoshop, I have a “dusted-off” version of the chart ready for printing. I tried to limit the edits to repairs related to color fading and some extreme rips, as I thoroughly appreciate the aged appearance of the chart.
Following the license of Lawrence Livermore National Laboratory’s original image upload, this updated version is also licensed under the Creative Commons 2.0 license.