April 2010 Archives

Broadcasting, Russian Style

Why I'm going to jail for being excessively innovative.

I've been reading web pages about some interesting RF devices. The Russians bugged the Great Seal of the United States at the Moscow embassy some years ago. You can read about it here, here, and here. The method they chose was rather impressive. The design was innovative, simple, and involved no active electronics. Using sound waves striking a metal diaphragm, they were able to modulate a reflected signal. The metal diaphragm formed one half of a capacitor. As the capacitance changed in value, the resonant circuit (resonant cavity) it was attached to changed frequency. When "lit" with several watts of RF, it created an echo much like RADAR which, instead of shifting frequency in response to motion (Doppler shift), shifted frequency with the sound in the room. When researching bugs and other covert listening devices, the term "non-linear junction detector" is often cited as a way to detect bugs.
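The frequency-shift mechanism is just LC resonance: f = 1/(2π√(LC)), so a diaphragm that varies the capacitance varies the frequency of the echo. A minimal sketch, using made-up component values (the Seal bug's actual parameters are not assumed here):

```python
import math

def resonant_freq(L, C):
    """Resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values only -- NOT the real bug's dimensions:
L = 10e-9          # 10 nH equivalent inductance of the cavity
C0 = 1e-12         # 1 pF nominal diaphragm capacitance
delta = 0.01e-12   # sound pressure flexes the diaphragm by ~1%

f_rest = resonant_freq(L, C0)
f_flexed = resonant_freq(L, C0 - delta)
print(f"at rest: {f_rest / 1e6:.1f} MHz")
print(f"flexed:  {f_flexed / 1e6:.1f} MHz")
print(f"shift:   {(f_flexed - f_rest) / 1e6:.2f} MHz")
```

A 1% capacitance swing moves the resonance by roughly half a percent, which is plenty for the interrogating receiver to demodulate as audio.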

From my radio hobby, I know that non-linear junctions are the bane of radio communications. Interference to repeaters can often be traced to something as simple as a rusty bolt or a rusted tin roof in the vicinity of any of the antennas radiating signals. The product of the mixing signals is the interference. Notice the words "mixing" and "product". Any time a signal is intentionally "mixed", several "products" result. The most common device for "mixing" RF is -- suspenseful pause -- a "diode" "mixer". A "diode" is by definition a non-linear junction. And interestingly enough, a diode may be made out of an evacuated glass tube.

Now, what if you take two carriers in the same band and cause them to mix together, creating a signal on a known secondary frequency? This is astoundingly easy to do at microwave frequencies, where one carrier may easily be offset from another by a few tens of megahertz yet still fall within the resonance region of the attached antenna. By taking a signal at 10,100 MHz and another at 10,200 MHz, the resulting products, or heterodynes, will be at 20,300 MHz (the sum), 100 MHz (the difference), and the original 10,100 MHz and 10,200 MHz carriers. It is necessary to employ a post-mixer filter to remove the unwanted frequencies. At 10 and 20 GHz, this filter may be implemented in microstrip. In the same fashion, the antenna itself may be made out of microstrip, simply etched onto a piece of PC board, or constructed from conductive tape attached to points on a PC board. Mixing carriers presents its own problems: most mixers are low-level, so to create a large post-mixer signal, the mixer must be selected or engineered for the levels present.
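The arithmetic of those products is easy to sketch. This toy model assumes an ideal mixer (a real diode mixer also generates higher-order intermodulation products):

```python
def mixer_products(f1, f2):
    """Ideal mixer output: the two originals plus sum and difference heterodynes."""
    return sorted({f1, f2, f1 + f2, abs(f1 - f2)})

def bandpass(freqs, center, width):
    """Post-mixer filter: keep only products inside the passband."""
    return [f for f in freqs if abs(f - center) <= width / 2]

products = mixer_products(10_100, 10_200)  # frequencies in MHz
print(products)                 # [100, 10100, 10200, 20300]
print(bandpass(products, 100, 20))  # only the 100 MHz difference survives
```

The filter is what turns a mess of heterodynes into a clean signal on the frequency of interest.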

By making a small PC board which contains resonant antennas at microwave frequencies, a feed-horn of sorts may be created. It is possible to transmit two signals at one time, using antenna or signal polarities which are opposites of each other. These opposing signals may be either horizontally and vertically polarized, or left-hand and right-hand circularly polarized. The signals must be kept as close to orthogonal as possible to prevent cross-talk between the mixer inputs, which would otherwise subtract the product from the output of the mixer. By using vacuum tubes as the mixing device, a low-power heterodyne may be generated from some fraction of the power directed into the microwave antennas. The result is a remotely located transmitter on the frequency of interest, without any transmitting equipment present at that location. This has the benefit of confounding direction-finding equipment. One may further obscure the location of the microwave transmitting equipment through the use of plane reflectors or metallic billboards. It is possible to push this one step further and remove the transmitter another step by using an on-channel active repeater to hide the location of the actual transmitter itself. By using the above methods and suggestions, it's possible to create a throw-away device which may be replaced or placed in a different spot each time, yet still radiate on the frequency of interest. And that's why I'm going to jail for being excessively innovative....

Why Scalability Matters

Several years ago, everyone was going nuts about scaling -- how to make applications scale -- so that we, the administrators, engineers, and creators of the applications could build out our infrastructures accordingly and scale the application by growing the pile of hardware under it. Google did this best, using three servers in the datacenter to provide the storage necessary to support Gmail and other applications. The failure of one system resulted in the remaining two computing parity of that storage, still providing that information much as a RAID 5 controller in a server would, yet without the added cost of that add-on controller or the labor to install it. There are many examples of innovative ideas that Google has been able to harness; this is not meant to be an exhaustive list.

In attempting to scale applications, we tried to break the parts apart so that we could take best advantage of the computing power available at commodity rates -- single- or dual-processor white boxes. With the invention of the multi-core processor, however, the ability to scale beyond a single-threaded application quickly became extremely important. The hard limit of about 2.8 GHz on cheap silicon capped processing speed. No longer could processing be monolithically streamlined into a single processor core. It is now necessary to atomically multithread applications to take advantage of the multiple cores, and to divide the processing power among as many cores as possible. In effect, we've created Crays on a single slab of silicon -- multiple processors connected together with high-bandwidth interconnects. And like the users of those Crays, we have to sit back, think about the problem mathematically, and determine how many pieces we can break it into to get them processed and completed in the environment the manufacturers have given us. We now have 42U of white-box server processors in one or two pieces of silicon in a 2RU box! Technology is grand indeed, but can our educational models keep up? Can we truly impart the necessary mindset to the current generation of computer scientists, so that they keep an open mind and solve the problems before them with whatever combination of hardware the manufacturers provide?
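As a toy illustration of breaking a problem into atomic chunks for multiple cores (my own sketch, not anyone's production code), here is a sum split across every available core with Python's multiprocessing pool:

```python
from multiprocessing import Pool
import os

def partial_sum(chunk):
    """One atomic slice of the problem, handled by a single core."""
    lo, hi = chunk
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    workers = os.cpu_count() or 1
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    assert total == n * (n - 1) // 2  # closed-form check: sum of 0..n-1
    print(f"summed 0..{n - 1} across {workers} cores: {total}")
```

The interesting part isn't the sum -- it's that the decomposition into independent chunks has to be designed in from the start, which is exactly the mindset shift the multi-core era demands.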

As a systems administrator, my job has become more difficult. In previous times, it was easy to watch the system load and processes and determine when a server was loaded enough to provision more hardware, or to kill off rogue processes that were hogging resources. In the present day, it is necessary to determine how many processors are in use and whether the workload presented to them is properly distributed. While my application administrators are able to spread the load over multiple servers, each server should be appropriately loaded before we spend money allocating the next one. So multi-core servers and virtualization technologies play into each other. By reducing spare capacity to a minimum, costs may be kept down; excess capacity may be sold to other organizations (this is what Amazon does). At the end of the day, my bosses and customers are demanding the most services, with the most uptime, for the least money spent. And I want to deliver that.

But I cannot do that if my vendors and application providers do not support me. Contracts that specify a fixed number of cores the application may run on limit the infrastructure I can deploy to support that application. I may have four 2.8 GHz cores in one server, and 32 or 64 cores running at 900 MHz in another. If I run the cluster-designed software on the 64-core server, it's going to make anything short of a quad-processor box at 32 GHz aggregate look like a VW Diesel Rabbit at an F1 race. We need to re-think our billing paradigm, basing it on per-core performance, and break our applications into atomic chunks that can take advantage of the cores and cheap IO we have now. We've scaled a rack's worth of computing into a single chip. We're not getting faster, we're getting smaller. We need our applications and OSes to support us and adapt to the conditions. And we need our programmers to be open-minded about where changes in the electrical-engineering world will take them. Look at breaking the problem apart instead of building one pipeline to achieve an answer. Hopefully, we can make our future brighter, one red-hot screaming CPU core at a time.

SDSL Pipelines

Or what I know about them now that you don't.

Years and years ago, I was introduced to the Pipeline series of routers by a friend and employer. As a way to save money, he had obtained an Ascend Pipeline P130 and used it to replace a Cisco 3640, saving some $500 a month in router rental. The catch was that I had to figure out how to make it talk to the provider's circuit. This was easier than I expected, once I had them on the phone and they told me they could switch the DS1 over to PPP instead of Cisco HDLC. When they did, the circuit came up. A few months later I attempted the same trick with a new provider network and a different revision of the Lucent Pipeline P130, and had absolutely no success for several days. Finally, I noticed the "Nailed Group" number on the new router was set to "1" while the old, working router was set to "3". I changed the group over to "3" and got a lock on the WAN light.

After much consternation, and the discovery that there are more sub-variants of Pipeline routers than there are species of cacti, I finally succeeded in obtaining two SDSL Pipelines with feature sets close enough to support communication with each other. These routers turned out to be Lucent Cellpipes, specifically the DSL-CELL-50S.

Of course, they again proved to be a source of MUCH consternation, as no setting seemed to convince them to talk to each other. Having worked in an xDSL test lab some years ago for a major chipset manufacturer, I knew these could talk to each other: I had seen one used as a CO device, and having read the specs myself, I knew that either unit could provide the head-end side of the circuit. Knowing that ATM was intimately involved in the lower DSL layers, I opted to configure the modems in ATM-VC mode, and configured them to bridge.

Bridging on the Pipeline is a very dodgy proposition. Most of the providers used bridging, which led to the Pipeline getting a bad reputation. With only 25 MHz or so of CPU, the processor would attempt to bridge the entire traffic of the 10 Mbit Ethernet segment onto the WAN circuit, while copying the incoming WAN data out the Ethernet port. The end result was that the CPU was constantly overwhelmed, servicing interrupts continuously and trying to IO itself to death. On the other hand, if you forced it to do only IP routing, the little bastards flew, happily updating the interactive terminal interface while responding to SNMP. I once schemed to build a BGP router out of a FreeBSD PC using a pair of Pipeline P130s bridging Ethernet to DS1s and terminating the PPP session on the PC using PPPoE. Unfortunately, our network never got large enough for that. Such were the days of the DotComs and the fickle ways of the investor.

Back to configuration of the Pipelines: I set the DSL layer to communicate using ATM VCs over VPI 8 and VCI 35. In the lab we expressed this as "8.35", or "0.35", depending on which port we were using. On the outside, the inner workings of ATM were too complicated for most people to understand, so the VPI.VCI just became another meaningless setting that HAD to be input exactly for the system to work. After the ATM was configured, I rolled back a step and set the modems to communicate at a single DSL speed ("mode=singlerate") of around 2.3 Mbit/s. Now the modems started attempting to lock to each other, but kept dropping the call. I set one to COE and the other to CPE, then configured the COE unit to only answer the call, and the CPE unit to only call, never to answer. Now the calls were being initiated, but data was not flowing. "What the hell could the problem be?" I asked myself.

Fortunately, the several modems I had gathered had documentation. One of them actually had the xDSL-specific menu addendum, but all of them explained, in a physical-layer-independent way, how Ascend bridging worked and how the PPP system was used. "Knowing" how PPP worked and that the circuit was largely free of eavesdropping, I configured the units to use PPP over ATM (PPPoATM) to pass traffic to each other. The PPP system was set up to use PAP, and an arbitrary password was selected and configured into both modems as both the received (expected) and the sent password. Likewise, I set both modems' names (the name under Connections) to "DSLPipe". This way, both modems would ask each other for the same bits of information, allowing either modem to assume the role of COE or CPE depending on how I had them configured.

At this point, I had two modems configured to use only 2.3 Mbit/s as the rate, ATM-VC 8.35, PPPoATM, and Ascend bridging over the PPPoATM. As soon as the call went up, traffic started flowing! Now to start tinkering...

The first change was to turn off VJ compression and header compression. This brought throughput up to 2.0 Mbit/s from 1.024 Mbit/s. I left bridging on, as I didn't want to explore routed networks at this time; most of my home network was a single LAN anyway, and I lacked other devices to plug in and test connectivity with. Next, I set both modems to autobaud, with the base rate set to 2.3 Mbit/s and the SDSL data rate set to 2.3 Mbit/s. They auto-negotiated as expected, and dealt with interruptions as expected. Finally, I set the circuit type to Nailed on both sides, so that if either end went down, an attempt would be made to restart the connection.

And the data kept flowing. Not entirely bad for only about four hours of work. And the two modems were connected over an eight foot piece of RJ-11 patch cable.

So now you know what it takes to make those bloody modems work. The next project is to attempt the same with the V.35 Pipelines, which are basically a hobbled version of the modem I've already described.

Originally, I got these modems to attempt a bidirectional data link over 900 MHz. By cutting away the resistive Wheatstone bridge hybrid in the front end of the modem, it is possible to separate the receive side from the transmit side. By piping these signals through separate block frequency converters (heterodyning transverters), it is possible to put them on the 900 MHz amateur band. The benefit of using DSL modems for this purpose is that they already possess adaptive electronics and programming, which allow them to redistribute the bits as the channel capacity allows. If, for instance, a carrier appears and stays in a given place, the modems may alter speeds or re-profile the channel. This lets me focus on getting the bits where I want them to go and fiddling with RF, leaving the data layer delegated to prior art. Thank you for reading what may be my longest post ever.

Also, this information is copyright 2010 by Kris Kirby, and all rights are reserved. You may not use any of this information in support of an eBay auction.

It is important to remember that ATM is involved here, so there is a percentage of the line rate which cannot be used for payload: every 53-byte ATM cell carries 5 bytes of header, roughly a ten percent "cell tax" before any AAL5 framing overhead.
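The cell tax is easy to quantify. A quick back-of-the-envelope sketch (best case only; AAL5 trailers and padding cost a little more per frame):

```python
ATM_CELL = 53      # bytes on the wire per cell
ATM_PAYLOAD = 48   # payload bytes per cell (5-byte header)

def payload_rate(line_rate_bps):
    """Best-case payload rate after the ATM 'cell tax' (ignores AAL5
    trailer and padding, which shave off a bit more per frame)."""
    return line_rate_bps * ATM_PAYLOAD / ATM_CELL

sync = 2_300_000  # the 2.3 Mbit/s SDSL sync rate used above
print(f"{payload_rate(sync) / 1e6:.2f} Mbit/s best-case payload")
```

That works out to about 2.08 Mbit/s of best-case payload at a 2.3 Mbit/s sync rate, which roughly lines up with the 2.0 Mbit/s I measured once VJ compression was out of the way.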

MASERs and Why We Don't See Them

Over the past few months, I've given thought as to why we don't see large-scale development of MASERs. MASERs are like LASERs, but generate radio-frequency (RF) waves instead of light waves; both light waves and radio waves are electromagnetic waves. We have high-power, coherent lasers used for cutting steel. However, we do not at present have high-power MASERs. Initially, I assumed this was because it would be the topic of strictly military endeavors. And while that is true, the body of evidence suggests that MASERs didn't develop into high-power variants for a different reason -- there was no need. Beyond generating a stable frequency reference through cesium or hydrogen emission, no one saw a need for a high-power, single-frequency radio emission. I believe this may be because radio waves are seldom coherent, and therefore may have been difficult to excite in a precisely controlled manner. So my conclusion is that the hydrogen and cesium MASERs do exactly what their name implies. The technology is mature. Unfortunately, the lack of a high-power derivative limits the offensive use of the technology, as well as its use in communications. And we've gone on to use MASER-like technology to excite hydrogen atoms with magnetic fields -- we call it "MRI", or Magnetic Resonance Imaging.

About this Archive

This page is an archive of entries from April 2010 listed from newest to oldest.
