Recently in Hacker Category

Kitchen Hacks

As mentioned in an earlier post about UPS loads, there are devices in the kitchen that can be better used with a hack or two. In my case, I find that the default settings of the (now) small George Foreman grill are entirely too high to cook something such as a half-inch thick steak. The result is that the center is rare (if barely defrosted) and the exterior is almost carbonized. Well, that's unacceptable.

Fortunately, I can deal with that. The George Foreman grill is rated for 760 watts. Since most dimmers are rated for around 600W, this is pretty close. The net result is that the dimmer may fail in a spectacular way. Basically:

Purchase:

1 short high-power appliance extension cord (14AWG or 12AWG)
1 dimmer
1 cover plate
1 metal outlet/switch box
2 Romex or other wire clamps

I assume that you have wire nuts on hand to tie the wires together.

Cut the extension cord in two.
Install the Romex clamps into the metal box.

Connect a (14AWG) ground wire to the box.

I use metal boxes. While they are not friendly to the counter top, they do provide a solid surface for grounding, and some protection, should the bit inside decide to explode.

Connect a ground wire (14AWG) to the dimmer.
Wire-nut the ground wires for the box, dimmer, input (plug) and output (outlet) of the extension cord.
Wire-nut the neutral (white) wires of the extension cord together.
Connect the black wire plug-side of the extension cord to the input side of the dimmer.
Connect the black wire outlet-side of the extension cord to the output side of the dimmer.

Install the dimmer into the box.
Check your wiring for shorts, etc.
Screw on the cover plate carefully so you don't crack it.

Cook. Just don't run more than 775W of load on the dimmer. The George Foreman grill, as tested, is a resistive device. I find that if you turn the grill down until the neon bulb is just barely lit (turn it down any lower and the bulb goes out, but you can turn it back up to make it come back on), you're at a good temperature for grilling some steak without burning and/or undercooking it.
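
As a rough sanity check, and assuming the grill element is purely resistive as noted above, the numbers work out something like the sketch below. The 90 V figure is only an example of a dimmed RMS voltage, not a measurement:

# Rough numbers for a resistive grill element on a dimmer (sketch, not measured).
RATED_POWER_W = 760.0   # grill nameplate rating
LINE_VOLTAGE_V = 120.0  # nominal US line voltage

# For a resistive load, R = V^2 / P.
resistance_ohms = LINE_VOLTAGE_V ** 2 / RATED_POWER_W   # ~18.9 ohms

# Current at full line voltage -- what the dimmer actually has to carry.
full_current_a = LINE_VOLTAGE_V / resistance_ohms        # ~6.3 A

# Power at a reduced RMS voltage from the dimmer (90 V is just an example value).
dimmed_rms_v = 90.0
dimmed_power_w = dimmed_rms_v ** 2 / resistance_ohms     # ~430 W

print(resistance_ohms, full_current_a, dimmed_power_w)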

Disclaimer: Don't follow any of these instructions under any circumstances for any purpose whatsoever. This information is presented for purely entertainment value only. None of this information has been checked for correctness or compliance with any electrical codes. Author is not a licensed engineer, electrician, chef, cook, or lawyer. Author is not responsible for any fires which may result from the use of this information, kitchen, electrical or otherwise. Author not responsible for illness as a result of eating undercooked food. Cook all food completely, preferably in an autoclave or medical grade incinerator.

Packet Radio / TNCs

Thoughts/observations:

Speed is important. Baud (symbol) rates are limited by law, but baud doesn't equal bit rate; baud is the symbol rate at baseband. Using QPSK you can double throughput at the same symbol rate, and you can still throw bits away on FEC if you need to. Phil Karn is a huge proponent of this and for good reason; he had a critical role in developing Qualcomm's satellite-based terminal systems used by truckers everywhere.
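
To make the baud-versus-bits distinction concrete, here's a quick sketch: the legal limit is on the symbol rate, while throughput scales with bits per symbol, so QPSK carries twice the bits of a binary mode at the same baud. The rate-3/4 FEC figure is only an illustrative placeholder, not a recommendation:

from math import log2

def bit_rate(symbol_rate_baud, constellation_size, fec_rate=1.0):
    """Raw bit rate for a given symbol rate, modulation order, and FEC code rate."""
    return symbol_rate_baud * log2(constellation_size) * fec_rate

# Same 1200-baud symbol rate, different modulation:
print(bit_rate(1200, 2))                  # BPSK/FSK: 1200 bit/s
print(bit_rate(1200, 4))                  # QPSK:     2400 bit/s
print(bit_rate(1200, 4, fec_rate=0.75))   # QPSK with rate-3/4 FEC: 1800 bit/s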

6m might be good for this, but easily obtained radios (Motorola Syntor Xs, GE Deltas, etc.) are starting to disappear. Also, they are large, and without small TNC/node hardware that fits inside the radio, there is little reason to deploy equipment, because more parts can break. Of course, the radios themselves need about 30A on transmit, so that would also need to be dealt with in a manner that doesn't impact size and site power requirements.

There's very little reason why we can't push soundcard packet into smaller systems like the Alix line of micro-PCs. We can dedicate an Arduino to software packet detection, another to node/routing, and pass the data on to a host if need be. Or we can load the sound-card engine into memory as a TSR and boot the thing using DOS and a G8BPQ stack. With TNC-X, a KISS TNC, it's possible to do that and more. Ideally, speeds upwards of 19200 and full-duplex are desired. However, full-duplex generally requires real hardware on the "node" side of things, and duplexers are a generally fixed commodity.
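
For what it's worth, the KISS framing that TNC-X and most soundcard modems speak is simple enough to sketch in a few lines. This is a minimal illustration of encoding one outgoing data frame (the framing and escaping rules are the standard KISS ones; nothing here is taken from TNC-X's firmware):

# Minimal KISS frame encoder (sketch). KISS wraps an AX.25 frame in FEND bytes
# and escapes any FEND/FESC bytes that appear inside the payload.
FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD

def kiss_encode(ax25_frame: bytes, port: int = 0) -> bytes:
    out = bytearray([FEND, (port << 4) | 0x00])  # command 0x00 = data frame
    for b in ax25_frame:
        if b == FEND:
            out += bytes([FESC, TFEND])
        elif b == FESC:
            out += bytes([FESC, TFESC])
        else:
            out.append(b)
    out.append(FEND)
    return bytes(out)

# The result goes straight out the serial port to the TNC.
print(kiss_encode(b"\xc0test\xdb").hex())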

Also, proxy ARP may be a better way to use TCP/IP over AX.25, stealing information directly from the NETROM maps or something. For instance, my state net is 44.100.x.x, but good luck trying to actually get any of that to route outside of Mirrorshades.ucsd.edu.

Really, something closer to the mesh-networking systems used for next-generation wireless networking would be better: something that can gracefully handle losing a node and keep multiple routes present in the network stack. You can't really do that on a Z80 with 16k of RAM at 10 MHz.

IPv6 needs to be implemented at some point, with a graceful handling of IPv6 addresses to allow for compacting unnecessary zeros.
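
Most modern stacks already do the compaction (RFC 5952 zero compression); a quick illustration using Python's standard ipaddress module:

import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)   # 2001:db8::1 -- runs of zeros collapsed to ::
print(addr.exploded)     # full form with every zero written out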

Software TNCs/Minimal TNCs:

AVR:
http://www.byonics.com/tinytrak4/
http://vk7hse.hobby-site.org/wiki/index.php/Main_Page
http://www.garydion.com/projects/whereavr/

Arduino:
http://www.adafruit.com/blog/2010/06/11/aprs-radio-shield-for-arduino/
http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1232745624
http://dangerousprototypes.com/2011/01/31/packet-radio-and-the-arduino-radio-shield/
http://wiki.argentdata.com/index.php?title=Main_Page
http://forums.adafruit.com/viewtopic.php?f=10&t=11763
https://sites.google.com/site/ki4mcw/Home/arduino-tnc
http://mhvlug.org/pipermail/mhvlug/2011-April/031359.html

PIC:
http://www.ringolake.com/pic_proj/pic_index.html

HF:
http://www.brazoriacountyares.org/winlink-collection/AGW/PE%20Pro/pehelp/6hf.htm
http://www.tapr.org/pr_intro.html
http://wa8lmf.net/ham/30m-magloop-ant.htm

Packet general:
http://www.kc2rlm.info/soundcardpacket/6modes.htm
http://www.enide.net/webcms/index.php?page=wb8wga-tnc
http://www.amsat.org/amsat/articles/kd2bd/9k6modem/
http://nonbovine-ruminations.blogspot.com/2008/05/ham-radio-internet-and-cell-phone.html
Buck's articles: http://www.buxcomm.com/catalog/ (look down on the left-hand side)
GMSK:
http://www.southgatearc.org/articles/highspeedpacket.htm

Ham general:
http://hamlib.sourceforge.net/

Nodes:
http://digined.pe1mew.nl/?Introduction
http://www.ir3ip.net/iw3fqg/uidigi-e.htm
- TheNet:
http://vectorbd.com/bfd/thenet/index.html
http://nl3asd.tripod.com/thenet.html
http://g8kbb.roberts-family-home.co.uk/html/thenet_x-1j.html
http://servv89pn0aj.sn.sourcedns.com/~gbpprorg/Radio_Mods/MISC/PACKET/PACKX09.TXT
http://kf8kk.com/packet/jnos-linux/thenet-ops-1.htm
- IP use in TheNet nodes:
http://www.w7eca.net/forum/viewtopic.php?f=84&t=444
 - TheNet replaced by NOS:
http://62.49.17.234/thenet.htm
 - JNOS:
 INP something. I dunno...
edit: Ah, here it is. A European internode protocol:
http://dl6mpg.net/nordlink/ftp/pub/documentation/INP/inp3.pdf
 http://www.langelaar.net/projects/jnos2/documents/inp2011.txt
http://www.langelaar.net/projects/jnos2/news/
Intro to NOS: (packet sizes, numbers)
http://www.febo.com/hamdocs/intronos.html
http://kf8kk.com/packet/jnos-linux/whetting/whetting.htm
FlexNet:
http://www.afthd.tu-darmstadt.de/~flexnet/

DX cluster:
http://www.ab5k.net/Home.aspx
http://www.dxcluster.org/main/index.html

Antennas:
http://wa8lmf.net/ham/30m-magloop-ant.htm


Old Huntspac stuff:

http://www.qsl.net/n8deu/huntspac.htm

Most of the older stuff fell out of use as people got older and drifted into different modes / cliques / clubs.



The Worst Possible Idea Must Be Implemented

No, I don't mean this in a sarcastic sense. I mean that when you have the worst possible idea, you must implement it. In my case, it was while building a Jumpstart server after building a Kickstart server. I have a Dell Optiplex GX1p (450MHz) missing most of the plastic (so it's really more of a Dell T1000 Optiplex). While DBAN'ing any number of SCSI and IDE hard drives (of almost every interface variety imaginable), the Optiplex would routinely reconfigure its BIOS to change the boot order. After enough times of overriding it and seeing the PXE boot banners go by, I figured, what the hell?

Download pfSense and Debian Linux, set up a router and firewall in a virtual machine, and provision another DHCP and TFTP server for PXEBOOT. Download a copy of DBAN. Put in blender. Three days later (and all typos removed one by one), I have DBAN in the boot menu for the PXEBOOT server. And it's set for auto-timeout. Yes, the worst possible idea must be implemented. Plug into my network and don't ask me; hope that you don't have your box configured to PXEBOOT. >=D

Original idea comes from Joako  over at DSL Reports and this post: http://www.dslreports.com/forum/r24834879-How-To-PXE-Boot-DBAN.

However, I found I kept having issues with DBAN booting. Finally, I cut it down to as few menu items as possible:

label DBAN
        MENU LABEL DBAN
        kernel dban/dban.bzi
        append nuke="dwipe"


label DBANautonuke
        MENU LABEL DBAN Autonuke
        kernel dban/dban.bzi
        append nuke="dwipe --autonuke" silent


And thus it was dangerous. Once ONTIMEOUT is set to DBANautonuke and DEFAULT set to DBANautonuke, only the TIMEOUT value saves you. Timeout is in tenths of seconds, so a TIMEOUT of 200 is 20 seconds.

So the recap here is that you configure a PXEBOOT server in your favorite fashion (everyone uses different paths, pick a tutorial (RedHat, IBM, etc.) and go through it), mount the DBAN CD on a mountpoint (/media/cdrom or /mnt or /cdrom or whatever) and copy over the contents of the CD to your tftpd home directory. I tar'd the files up into a dban.tgz tarball and put them into a directory named "dban". dban/isolinux.cfg tells you everything you need to plug into the pxelinux.cfg/default file for menu.c32 to know about.

Good luck, and happy trails.

The other fun part of this was configuring pfSense to do some routing and firewall work. Where I usually work, DHCP servers are verboten, so one must take a few precautions to make sure that evil DHCP packets aren't forwarded. Also, if there is an internet-facing port, one must take pains to assure that packets go in the correct direction. Download the pfSense install CD image, fire up VMware, and install pfSense to the hard drive of the virtual machine. Give the virtual machine four interfaces: LAN, WAN, OPT1, and OPT2. LAN is 172.16.0.1/24, WAN is the internets feed, and OPT1 is a /30 (172.16.10.1/30) to the ethernet switch (172.16.10.2/30) for out-of-band management. pfSense handles the NAT and knows about routing.
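
The /30 to the switch is the usual point-to-point arrangement: two usable addresses, one for pfSense and one for the switch's management interface. A quick check with Python's ipaddress module, purely as an illustration of the subnet math:

import ipaddress

oob = ipaddress.ip_network("172.16.10.0/30")
print(list(oob.hosts()))      # [172.16.10.1, 172.16.10.2] -- pfSense and the switch
print(oob.broadcast_address)  # 172.16.10.3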

In another virtual machine, a Debian Linux server sits with two ethernet cards. But I ran out of ethernet ports on the physical hardware, so the PXEBOOT server is a one-armed router. To accomplish this feat against ISC-DHCPD3's best wishes, I had to think a bit.

My first hurdle was getting interface aliases configured to start automatically in Debian. Not so difficult with The Googles:

/etc/network/interfaces:

iface eth0 inet static
    address 172.16.0.2
    netmask 255.255.255.128
    network 172.16.0.0
    broadcast 172.16.0.127
    gateway 172.16.0.1
    dns-nameservers 172.16.0.1
    dns-search lan

auto eth0:1
iface eth0:1 inet static
    address 172.16.0.129
    netmask 255.255.255.128

So 172.16.0.1 is the pfSense router (with DHCP turned off and the DNS Forwarder on), 172.16.0.2 is the "outside" IP of the PXEBOOT server, and 172.16.0.129 is the "inside" IP of the PXEBOOT server.
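
Since the "outside"/"inside" split is just 172.16.0.0/24 carved into two /25s, the addressing (and the broadcast addresses used in the interfaces file above) can be sanity-checked like so; again, only an illustration of the subnet math:

import ipaddress

lan = ipaddress.ip_network("172.16.0.0/24")
outside, inside = lan.subnets(prefixlen_diff=1)
print(outside, outside.broadcast_address)  # 172.16.0.0/25   172.16.0.127
print(inside, inside.broadcast_address)    # 172.16.0.128/25 172.16.0.255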

Then DHCPD had to complain:

pxeboot:/etc/dhcp3# /usr/sbin/dhcpd3
Internet Systems Consortium DHCP Server V3.1.1
Copyright 2004-2008 Internet Systems Consortium.
All rights reserved.
For info, please visit http://www.isc.org/sw/dhcp/
Wrote 3 leases to leases file.
Interface eth0 matches multiple shared networks
pxeboot:/etc/dhcp3#

What the deuce?

Googling was marginally useful. I've seen this problem before, however. I couldn't remember the exact reasoning behind why it behaves this way, but I remembered reading a message from Paul Vixie about it that explained the rationale for the behavior. And I believe it was a compile-time option or some oddball flag that one had to set to get rid of the error message. After racking my brain for a bit, I settled on the idea that DHCPD had too much information. So I commented out the first subnet definition (172.16.0.0 netmask 255.255.255.128):

#subnet 172.16.0.0 netmask 255.255.255.128 { }
subnet 172.16.0.128 netmask 255.255.255.128 {
        range 172.16.0.130 172.16.0.253;
        default-lease-time 14400;
        max-lease-time 38400;
        option subnet-mask 255.255.255.128;
        option broadcast-address 172.16.0.255;
        option routers 172.16.0.129;
# comment below out if the machine's name will be something else.
        option domain-name "lan";
        filename "pxelinux.0";
        next-server 172.16.0.129;
}

Of course, the PXEBOOT server still has the default route set to 172.16.0.1, so if packets get there, pfSense should know what to do with them (and if it doesn't, I don't care because this server was designed for DEATH ;). In all seriousness, this server will have to coexist in the near future with a Jumpstart server, so the ability to alter DHCP capabilities is welcomed.

So the PXEBOOT server is a one-armed router/DHCP server, serving out DHCP for a network that isn't routeable while another network on the same wire is.  And the pfSense firewall provides inward connectivity for SSH, while keeping off-LAN users out. Through the OPT1 interface, some NAT work and a firewall rule, telnet and the web interface for the HP Procurve 4000 are made accessible to the LAN, while denying remote users the ability to get to it.

Some of the complicated stuff I enjoy, other things I needlessly complicate. But for good reason. =D

The Shooting Gallery

Along the ways of my travels, I acquired a Sony VPH-1251, a one hundred and forty pound boat anchor of a projector. Like most older gear, it merely sneers at outlets. No, to fire up this epic beast of a projector requires no less than a virgin sacrifice, along with an anointing of blood, and a mystical incantation while pushing the "ON" button. And you have to notify the power company a week in advance.

At least that's what you'd expect me to say. Truthfully, it's a much better-behaved unit than modern projectors. The reason is that when this projector is showing black, it is drawing a minimal amount of power, as opposed to modern projectors, which use a lamp as a light source. Since the lamp is held at the same power level all the time, the wasted energy is given off as heat and light. On the other hand, my CRT projector comes up in white for twenty minutes to allow the tubes to warm up. This process may be bypassed (as I often do). Power consumption is 250W in black, 400W in white -- the power demand changes with the image produced. The caveat thus far is that for the power consumed, very little light is generated, though it's somewhere north of 150 lumens, peaking much higher (650 lumens) if small patches of white are needed.

The technogeek may appreciate a subtle feature of the older CRT projectors -- they sense magnetic fields. Periodically, about every six months, one must realign the red and blue projection images with the green (middle) one. So the alignment can change depending on how much metal you have under the projector, as well as due to natural disaster or fluctuations in Earth's magnetic field.

Curt Palme ranks it as a beginner projector; when you consider its 1992 vintage, and the computing and video of the day, it was a respectable projector. It's capable of handling raw VGA video to 1024x768 -- not much in today's world, but more than enough for HDTV. My projector has some 3,000 hours on it, and if the tubes age evenly, the projector can be used to 20,000 hours. Nice investment for "free", eh?

Goodness, if I've wasted this much time talking about the projector, one wonders what I am to espouse about the Nintendo...

There's not a lot to say about the Nintendo, or the Zapper. The Zapper works because of some special programming in the Duck Hunt cartridge that briefly turns the screen black while turning the duck or clay white. So one of the secrets to getting this game to work is to reduce the noise (stray light) or increase the contrast. Since I tested this at mid-day, I wasn't able to reduce the stray light any more than it already was. So I turned the contrast up to wide open (100, when it's normally set to 50), and the game started magically working. So the next step was to start turning the contrast down. I found I was able to get the game to work at about contrast level 62. I will be testing it when the sun goes down to see if things improve.
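
The cartridge-side logic amounts to two light-sensor samples per trigger pull. Here's a crude sketch of the idea only; read_light_sensor is a made-up stand-in for the Zapper's photodiode, not actual NES code:

def target_hit(read_light_sensor) -> bool:
    """Crude sketch of Duck Hunt-style hit detection: the screen goes black for a
    frame, then the target is drawn white. A hit means the sensor saw dark during
    the black frame and light during the target frame."""
    dark_frame = read_light_sensor()    # sampled while the whole screen is black
    target_frame = read_light_sensor()  # sampled while only the target is white
    return (not dark_frame) and target_frame

# With the contrast cranked up, the white box is more likely to register as "light".
print(target_hit(iter([False, True]).__next__))  # hit
print(target_hit(iter([True, True]).__next__))   # stray light during black frame: rejected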

So make sure your game is working properly -- that you see a flash of black -- and give it a try. And don't forget to give the projector a break afterwards, perhaps showing some static!

How I got TV-Out working on a D/Port with a D600

JTFC. If you have a profane mind like mine, you can pretty well figure out that acronym. If you don't, please don't try.

After much consternation, swearing, praying, and outright worship of the occult, I finally got TV-Out working on my latest new toy, a Dell D-Port docking station. The D/Port works with several of the Dell laptops, but the most common are the D600, D610, and D620. The Dell 8x0 series works as well. Finding the solution turns out to be a random series of events undertaken after considerable research.

The first suggestion I gleaned from the forums and boards was to install the Notebook System Software (NSS). I found the latest and greatest version of this, only to discover after downloading a one hundred megabyte file, that the package did not support my laptop. Back to the drawing board. Second, I was able to find the latest NSS that did support my laptop, and was chagrined to see that NSS was at least two years old and several revisions back from the latest version. Not good news. The only good news was that the correct NSS was a ten megabyte download. Kinda makes one wonder what Dell stuffed into NSS afterward that pushed the package size up by ninety megabytes.

And still TV-Out didn't work. I was almost to the point of pulling my hair out. I've got the latest version of the NSS software, I've got drivers I know to work on another docking station (The D/Dock), but still no TV-Out.

Finally, on a shot in the dark, I searched out the latest version of BIOS for the D600. The latest BIOS was A16. Installed on the laptop was A14. Ok, finally something I can update. I download the package, run the package, and the package asks me if I want to close all applications, flash the BIOS, and reboot. Well, this is certainly a new experience for me, but I'm open to new options and ideas so I went for it.

Surprise! It actually worked. </sarcasm>

Now I have TV-Out working with my D/Port, my D/Dock, and I'm happy... for the moment.

Dell, is it really too much to ask that you make this information more prevalent and -- I dunno, call me crazy -- actually support your customers?

Granted, I've only been dealing with crap like this since the first time I sat down to a PC. Still, when you have a single vendor solution -- and all of the gear is provided by that vendor -- one comes to expect that all of the parts will just work together. I don't think that is an unreasonable request, and I certainly don't appreciate that some companies charge an arm and a leg (or whatever the exchange rate for body parts to dollars or yen is, what with the dollar on the fall)  to integrate software and hardware from the vendor to make it work together.

The only thing more infuriating is when you realize that all of the gear I've mentioned was cutting edge in 2004, and it's now seven years later and none of this gear was updated to latest revisions until something broke.   

Improvised Power Loads

Being something of a home-bound hacker without a lab other than my own equipment, I find it's sometimes necessary to improvise equipment using other stuff. After replacing batteries in a UPS, I needed to check everything out for loose connections, unusual sources of heat, and so on. It took a lot of effort to take the UPS apart, and I didn't want to have to take it back apart or have any issues inside that would crop up later.

When testing a UPS, or any other power-generating device, a resistor is the best type of load to use. Since it's non-reactive, the impedance of the resistor equals its resistance across almost the entire spectrum.

Here's where we go crazy...

First, I tried the toaster oven. Fail. The toaster oven is 1400W. The UPS is nominally about 1500 VA, which works out to a few hundred watts short, because computer UPSes are rated in VA on the assumption that PC power supplies don't have a 0.99 power factor. Good test of the overload capacity, though.
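
Rough numbers, assuming a watt rating of around 0.65 times the VA figure -- a guess at a typical consumer UPS, not this unit's nameplate:

UPS_RATING_VA = 1500
ASSUMED_POWER_FACTOR = 0.65   # typical-ish for consumer UPSes; not measured here
TOASTER_OVEN_W = 1400

ups_watt_rating = UPS_RATING_VA * ASSUMED_POWER_FACTOR     # ~975 W
print(ups_watt_rating, TOASTER_OVEN_W > ups_watt_rating)   # 975.0 True -> overload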

The portable heater, at either 750 or 1500W, didn't work out for one reason or another. So I was left searching for something that would do the job. The microwave, at 1500W, was also out of the picture. The electric skillet was an amazing 1200W! Finally, I settled on the one obvious solution for some UPS runtime: the rice cooker / steamer.

The steamer's nameplate said 650W at 120V. This was a perfect load for the UPS, as it didn't exceed 1000VA or 1000W, allowing me some actual run-time with the UPS. Since the steamer works through phase change, the actual "output" of the device in steam wouldn't be very much. There would, however, be a few cups of hot water in the bottom.

So the next time you need to do a load test, remember what heaters you're surrounded with. Just because a heater is designed for 120V doesn't mean you can't apply it at a lower voltage.

650W / 120V = 5.4167 A. 120V / 5.4167A = 22.154 ohms.

Likewise, were one to attach an 8-ohm speaker across the 120V line, it would need to dissipate 1,800 watts and would trip the breaker on a 15A circuit eventually.

120V / 8-ohms = 15A, 120V * 15A = 1800W.
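
The same arithmetic in one place, for plugging in other appliance nameplates (just Ohm's law applied to the rated figures):

def resistive_load(nameplate_watts, nameplate_volts):
    """Return (current in A, resistance in ohms) for a resistive heater."""
    amps = nameplate_watts / nameplate_volts
    ohms = nameplate_volts / amps
    return amps, ohms

print(resistive_load(650, 120))   # steamer: (~5.42 A, ~22.2 ohms)

# An 8-ohm speaker across the 120 V line, for comparison:
amps = 120 / 8        # 15 A
watts = 120 * amps    # 1800 W -- hence the eventually-tripped 15 A breaker
print(amps, watts)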


If you really still want to buy power resistors, and there's no reason not to, you can find them cheaply at Surplus Sales of Nebraska and Fair Radio Sales. Be aware, however, that above audio frequencies, impedance may become a factor as the device may start radiating. Just because you can match an HF transmitter to two steamers in series doesn't mean you should use them as an RF dummy load. Gordon West, WB6NOA, famously demonstrated this by making contact with a fellow amateur radio operator across the world using a light bulb as a dummy load.

Broadcasting, Russian Style

or
Why I'm going to jail for being excessively innovative.

I've been reading web pages about some RF devices that are interesting. The Russians bugged the Great Seal of the United States at the Moscow embassy some years ago. You can read about it here, here, and here. The method they chose was rather impressive. The design was innovative, simple, and involved no active electronics. Using sound waves striking a metal diaphragm, they were able to modulate a reflected signal. The metal diaphragm formed one half of a capacitor. As the capacitor changed in value, the resonant circuit (resonant cavity) it was attached to changed frequency. When "lit" with several watts of RF, it created an echo much like RADAR, except that instead of shifting frequency in response to motion (Doppler shift), it shifted frequency with the sound in the room. When researching bugs and other covert listening devices, the term "non-linear junction detector" is often cited as a way to detect bugs.

From my radio hobby, I know that non-linear junctions are the bane of radio communications. Interference to repeaters may often be found as a simple rusty bolt or a rusted tin roof anywhere in the vicinity of any of the antennas radiating signals. The product of the mixing signals is the interference. Notice the words "mixing" and "product". Any time that a signal is intentionally "mixed", several "products" result. The most common device for "mixing" RF is  -- suspenseful pause -- a "diode" "mixer".  A "diode" is by definition a non-linear junction. And interestingly enough, a diode may be made out of an evacuated glass tube.

Now, what if you take two carriers in the same band and cause them to mix together and create a signal on a known, secondary frequency? This is astoundingly easy to do at microwave frequencies, where one carrier may easily be offset from another by a few tens of megahertz, yet still be within the resonance regions of the attached antenna. By taking a signal at 10,100 MHz and another signal at 10,200 MHz, the resulting products, or heterodynes, will be at 20,300 MHz, 10,100 MHz, 10,200 MHz, and 100 MHz. It is necessary post-mixer to employ a filter to remove the unwanted frequencies present. At 10 and 20 GHz, this filter may be implemented in microstrip. In the same fashion, the antenna itself may be made out of microstrip, simply etched onto a piece of PC board. Or it may be constructed using conductive tape attached to points on a PC board. Mixing carriers presents its own problems. Most mixers are low-level; to create a large post-mixer signal, the mixer must be selected or engineered for the levels present.
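
The first-order products are just the sums and differences of the two carriers (higher-order terms appear too if the mixer is driven hard); a quick check of the example frequencies above:

def first_order_products(f1_mhz, f2_mhz):
    """First-order mixer outputs: the two originals, their sum, and their difference."""
    return sorted({f1_mhz, f2_mhz, f1_mhz + f2_mhz, abs(f1_mhz - f2_mhz)})

print(first_order_products(10_100, 10_200))
# [100, 10100, 10200, 20300] -- keep the 100 MHz product, filter off the rest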

By making a small PC-board which contains resonant antennas at microwave frequencies, a feed-horn of sorts may be created. It is possible to transmit two signals at one time, using antenna or signal polarities which are at opposites of each other. These opposing signals may be either horizontally- and vertically-polarized, or left-hand and right-hand circularly polarized. It is necessary to use signals at the peak of perpendicularity or orthogonality to prevent cross-talk between the mixer inputs which would otherwise subtract the product from the output of the mixer. By using vacuum tubes as a mixing device, a low-power heterodyne may be generated using some fraction of the power directed into the input of the microwave antennas. The result is a remotely-located transmitter on the frequency of interest, without presence of any transmitting equipment at that location. This has the benefit of confounding direction finding equipment. One may further obscure the location of the microwave transmitting equipment through the use of plane reflectors or metallic billboards. It is possible to push this one step further and remove the transmitter another step by using an on-channel active repeater to hide the location of the actual transmitter itself. By using the above methods and suggestions, it's possible to create a throw-away device which may be replaced or placed in a different spot each time, yet still radiate on the frequency of interest. And that's why I'm going to jail for being excessively innovative....


SDSL Pipelines

Or what I know about them now that you don't.

Years and years ago, I got introduced to the Pipeline series of routers by a friend and employer. As a way to save money, he had obtained an Ascend Pipeline P130 and used it to replace a Cisco 3640, saving some $500 a month in router rental. The catch was that I had to figure out how to make it talk to the provider's circuit. This was easier than realized, once I had them on the phone and they told me they could switch the DS1 over to PPP instead of Cisco HDLC. When they did, the circuit came up. A few months later I attempted the same trick with a new provider network and a different version of the Lucent Pipeline P130, and had absolutely no success for several days. Finally, I noticed the "Nailed Group" number on the new router was set to "1" and the old, working router was set to "3". I changed the group over to "3" and got lock on the WAN light.

After much consternation with discovery of more sub-variants of Pipeline Routers than there are species of cacti, I finally succeeded in obtaining two SDSL Pipelines which were close enough in feature set so as to support communication with each other. These routers turned out to be Lucent Cellpipes, specifically the DSL-CELL-50S.

Of course, they proved again to be a source of MUCH consternation, as no setting seemed to convince them to talk to each other. Having worked in an xDSL test lab some years ago for a major chipset manufacturer, I knew these could talk to each other: I had seen one used as a CO device, and had read the specs myself and knew that any of these devices could provide the head-end side of the circuit. Knowing that ATM was intimately involved in the lower DSL levels, I opted to configure the modems in ATM-VC mode, and configured them to bridge.

Bridging on the Pipeline is a very dodgy proposition. Most of the providers used bridging, which led to the Pipeline getting a bad reputation. At only 25MHz or so of CPU, the processor would attempt to bridge the entire traffic of the 10Mbit ethernet segment onto the WAN circuit, while copying the incoming WAN circuit data out the ethernet port. The end result was that the CPU was constantly overwhelmed, servicing interrupts continuously and trying to IO itself to death. On the other hand, if you forced it to do only IP routing, the little bastards flew, happily updating the interactive terminal interface while responding to SNMP. I once schemed to build a BGP router out of a FreeBSD PC using a pair of Pipeline P130s bridging ethernet to DS1s and terminating the PPP session on the PC using PPPoE. Unfortunately, our network never got large enough for that. Such were the days of the DotComs and the fickle ways of the investor.

Back to configuration of the Pipelines,  I set the DSL layer to communicate using ATM VCs over VPI 8 and VCI 35. In the lab we expressed this as "8.35", or "0.35" depending on what port we were using. On the outside, the inner workings of ATM were too complicated for most people to understand, so the VPI.VCI just became another meaningless setting that HAD to be input exactly for the system to work. After the ATM was configured, I rolled back a step and set the modems to communicate at a single DSL speed ("mode=singlerate") of around 2.3Mbit/s. Now the modems started attempting to lock to each other, but kept dropping the call. I set one to COE, and the other to CPE, then configured the COE unit to only answer the call, and the CPE unit to only call, never to answer. Now the calls were being initiated, but data was not flowing. "What the hell could the problem be?" I asked myself.

Fortunately, the several modems I had gathered had documentation. One of them actually had the xDSL-specific menu addendum, but all of them explained, in a physical-layer-independent way, how Ascend bridging worked and how the PPP system was used. "Knowing" how PPP worked, and that the circuit was largely free of eavesdropping, I configured the units to use PPP over ATM (PPPoATM) to pass traffic to each other. The PPP system was set up to use PAP, and an arbitrary password was selected and configured into both modems as both the received (expected) and the sent password. Likewise, I set both DSL modems' names (the name under Connections) to "DSLPipe". This way, both modems would ask each other for the same bits of information, allowing either modem to assume the role of COE or CPE depending on how I got them configured.

At this point, I have two modems configured to only use 2.3Mbit/s as a rate, ATM-VC 8.35, PPPoATM, and Ascend Bridging over the PPPoATM. As soon as the call went up, traffic started flowing! Now to start tinkering...

First change was to turn off VJ compression and header compression. This brought throughput up to 2.0Mbit/s instead of 1.024Mbit/s. I left bridging on, as I didn't want to explore networks at this time, since most of my home network was a single LAN anyway, and I lacked other devices I could plug in and test connectivity with. Then I set both modems to autobaud, with the base rate set to 2.3Mbit/s and the SDSL data rate set to 2.3Mbit/s. They auto-negotiated as expected, and dealt with interruptions as expected. Finally, I set the circuit type to Nailed on both sides, so that if either end went down, an attempt would be made to restart the connection.

And the data kept flowing. Not entirely bad for only about four hours of work. And the two modems were connected over an eight foot piece of RJ-11 patch cable.

So now you know what it takes to make those bloody modems work. Next project is to attempt the same with the V.35 Pipelines, which are basically a retarded version of the modem I already described.

Originally, I got these modems to attempt a bidirectional data link over 900MHz. By cutting away the resistive Wheatstone bridge-hybrid in the front end of the modem, it is possible to separate out the receive side from the transmit side. By piping these signals to separate block frequency converters (heterodyning transverters), it is possible to put them on the 900MHz amateur band. The benefit to using DSL modems for this purpose is that they already possess adaptive electronics in programming, which allows them to redistribute the bits as the channel capacity allows. If, for instance, a carrier appears and stays in a given place, the modems may alter speeds or re-profile the channel. This allows me to focus on getting the bits where I want them to go, and fiddling with RF, leaving the DATA layer delegated to prior art.  Thank you for reading what may be my longest post ever.

Also, this information is copyright 2010 by Kris Kirby, and all rights are reserved. You may not use any of this information in support of an eBay auction.


It is important to remember that ATM is involved here, so there is a percentage of the raw line rate that cannot be used for payload: every 53-byte ATM cell carries 5 bytes of header, before AAL5 and PPP overhead are even counted.
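
As a rough check, taking the 2.3 Mbit/s rate configured above, the cell tax alone accounts for most of the gap between the line rate and the observed 2.0 Mbit/s:

LINE_RATE_MBPS = 2.3      # configured SDSL rate from above
ATM_CELL_BYTES = 53       # 48 payload bytes + 5 header bytes per cell
ATM_PAYLOAD_BYTES = 48

payload_rate = LINE_RATE_MBPS * ATM_PAYLOAD_BYTES / ATM_CELL_BYTES
print(round(payload_rate, 2))   # ~2.08 Mbit/s before PPP/AAL5 overhead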
