RSS Feed

Swap in issues being investigated
Thursday, February 6, 2020, 10:26AM; posted by jdbarnes.

Due to what looks like resource allocation issues with VLANs on the core networking of the testbed, users may not be able to swap in an experiment. We are working to resolve this issue.

Potential service outage Wed Dec 11th 5pm
Tuesday, December 10, 2019, 9:13PM; posted by jdbarnes.

We will be adding a redundant power supply to one of the core infrastructure machines on Wednesday December 11th 2019, and this machine has in the past suffered service interruptions when dealing with redundant power supplies. There should be no interruption, but this news item is being posted to notify you in case the introduction of redundant power supplies causes an interruption.

Web Service patching Nov 23rd.
Tuesday, November 19, 2019, 10:54AM; posted by jdbarnes.

There will be security patching of some web services for Deter on Saturday November 23rd. If you experience issues with the web interface, please report them using the issue tracking system so we can revert or update patches if necessary.

No power shutdown - Potential bpc node shortage Averted
Tuesday, November 19, 2019, 10:53AM; posted by jdbarnes.

There is a tentative/potential power outage in Berkeley this Wednesday Nov 20th and Thursday Nov 21st, which would affect all BPC nodes. Please be aware and plan accordingly for the shortage of nodes.

Swapin problems
Thursday, October 17, 2019, 5:29PM; posted by jelena.

We are aware of some swap in problems and are working to remedy them.

UCB Campus Nodes Unavailable due to Power Outage Beginning 8:00 pm PDT 8 October
Tuesday, October 8, 2019, 5:40PM; posted by bks.

Our power will be disconnected for fire prevention. We should be back in about two days.

Maintenance Downtime 4:00 - 8:00 pm Thursday 5 September 2019
Tuesday, September 3, 2019, 1:00PM; posted by bks.

Swap in and out will be disabled to update the connection to the Educational Colo facility.

Network connectivity problem (resolved around 10:00 PDT).
Friday, August 16, 2019, 9:10AM; posted by bks.

Maintenance Downtime 5:00 - 6:00 pm Thursday 8 August 2019
Monday, August 5, 2019, 5:18PM; posted by bks.

For intercampus facility upgrade.

System Release Announcement: Containers v1.1
Monday, June 20, 2016, 1:45PM; posted by alba.

All components in the DETERLab Containers System have been upgraded as of June 15, 2016. The host OS for all containers is now Ubuntu 14.04. The QEMU-KVM package was upgraded to 2.51-1 and all OpenVZ components were upgraded to the latest versions.

Virtual Distributed Ethernet (VDE) was dropped as the QEMU-KVM network implementation and replaced with a Linux bridge/tap mechanism. This gives an order of magnitude speed increase for QEMU-KVM networking, from roughly 450 Mb/s to 4.5 Gb/s.

To find out more, go to:

System Release Announcement: MAGI v1.90
Friday, June 17, 2016, 4:53PM; posted by jelena.

The new MAGI version was released on June 6, 2016, and is available to use on the DETERLab testbed. The new release includes support to develop agents in C. The C-agent library includes support for data management, agent response triggers, and logging. To find out more, go to:

Berkeley tunnel end upgraded to 10 gig.
Saturday, October 10, 2015, 2:28PM; posted by sklower.

The Berkeley end of the intercampus tunnel was upgraded to 10 gig on Saturday.

The ISI router and endpoint will similarly need to be upgraded before full use of it can be made.

Please expect additional downtimes where experiments will be frozen in place, but we hope with only limited disconnection time.

Expected Downtime August 17
Thursday, August 13, 2015, 9:38AM; posted by jross.

On Monday, August 17th, we are expecting a power interruption while a faulty UPS is replaced. This power interruption will bring down boss, users, and nearly all of the new dl380g3 nodes.

Please plan your experimentation accordingly!

swapping re-enabled
Monday, July 20, 2015, 6:43PM; posted by jross.

We are back up and running! Thanks for your patience.

power loss; swapping locked out
Sunday, July 19, 2015, 12:29PM; posted by jross.

We experienced a complete power loss at ISI and have disabled swapping until we have everything back up and running correctly.

swapping re-enabled
Saturday, March 14, 2015, 12:01AM; posted by jross.

An unanticipated dependency on a switch which we removed cost us some troubleshooting time, but the testbed is now back and ready for your experiments!

Control Net Interruption
Friday, March 13, 2015, 9:46AM; posted by jross.

Some control net traffic had a legacy routing through a switch which we have removed. We are working to re-route this traffic. In the meantime testbed functionality is impaired. We recommend waiting until we have posted that this is resolved before attempting to swap or modify any experiments.

Downtime Alert! DETER buildout begins March 13
Friday, March 6, 2015, 2:15PM; posted by jross.

Starting on the morning of March 13th, we will be upgrading the DETER testbed with new nodes and switches. Importantly, the pc3000 and pc3060 nodes are being replaced with more modern hardware. The pc3000 and pc3060 nodes will be shut down, so any experiments using them will need to be swapped out before the upgrade begins. Testbed capacity will be heavily impacted by this for the week which we are expecting this buildout to take, so please plan accordingly.

Testbed Crashed -> UPS failed 3/3/15 10:30 PST (3/4 6:30 UTC)
Wednesday, March 4, 2015, 1:08AM; posted by sklower.

All the servers went off the air earlier tonight due to the failure of a UPS.

Power has been bypassed around the UPS and is now unfiltered; this could mean another sudden crash of the testbed, but we hope not. Many of the machines came back up on their own, so rather than take them back down again, we completed the restart.

If you can delay your work for a couple of days, it might be prudent to do so.

Problems swapping in
Tuesday, March 3, 2015, 1:39PM; posted by jelena.

We are currently experiencing some problems swapping in experiments and are working to resolve them.

The cause has not yet been determined, although it seems to come and go and the testbed is currently functioning.

Metasploitable2 Image Available
Tuesday, September 2, 2014, 5:00PM; posted by bks.

Metasploitable2 is "an intentionally vulnerable version of Ubuntu Linux designed for testing security tools and demonstrating common vulnerabilities."

Upgrading ports during today's downtime.
Wednesday, November 6, 2013, 6:19PM; posted by jhickey.

I'm using the standard scheduled downtime today to freshen up the ports on boss and users. Swaps should be enabled by 7PM Pacific.

node_list moved to node_summary; node_list restored and made consistent
Friday, October 11, 2013, 5:50PM; posted by faber.

The node_list command now queries the XMLRPC interface as it used to do. Users who have been using node_list to access this function should use /usr/testbed/bin/node_list instead.

The node_list -c flag that outputs containerized node names has been modified in two ways. First, names are now produced without DNS qualifiers, as node names produced by the other options of this command are. A node in a VM container named a will be reported as a, not a.exp.proj as earlier versions of this feature did.

Second, the node_list -c flag now reports embedded_pnode containers (physical machines) as well. If no container VMs are present in an experiment, node_list -c and node_list -v produce identical output.

The node_list command is now available as node_summary. It is otherwise unchanged.
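The qualifier stripping described above amounts to keeping only the first DNS label of the container name; a minimal sketch, using the a.exp.proj example from above:

```shell
# Keep only the first label of a container node name, mirroring the new
# node_list -c naming described above.
fqdn="a.exp.proj"
short="${fqdn%%.*}"   # drop everything from the first dot onward
echo "$short"         # prints: a
```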

Containers now report hostname in same format as physical nodes
Friday, October 11, 2013, 11:25AM; posted by faber.

Users of containers should be aware that both openvz and qemu containers now report hostname as a fully-qualified domain name (FQDN) just as physical hosts do.

A container named a in the tcl file will now report its fully-qualified name rather than simply a.

This should simplify moving from physical nodes to container VMs.

Kali 1 Replaces Backtrack 5R3
Monday, August 12, 2013, 12:03PM; posted by bks.

This year Offensive Security named their penetration testing release "Kali" instead of "Backtrack." DETER now supports Kali 1 instead of last year's Backtrack 5R3.

Changes to DETER instructional policies and procedures, 2013
Friday, July 12, 2013, 2:47PM; posted by sklower.

1.) We will no longer generate passwords in advance for new accounts; when students are assigned to the class they will get an email with the URL for "forgot your password", prepopulated with their class ID, as part of the welcoming letter.

2.) On the day the end-of-class date is reached, all accounts will be locked - encrypted password set to '*', ssh keys and ssl certs deleted, web_frozen asserted - and email will be sent to the instructor apprising them of the action.

3.) 4 weeks after the class end date, all accounts will be *wiped* (files removed, email aliases removed, new ssl certs generated).

2 weeks after the class end date, we will send email to the instructor reminding them of the imminent wipeage. If the instructor wishes to delay this, they can update the end-of-class date.

4.) We will formalize the notion of incompletes; a student account which is marked as being granted an incomplete will not be wiped.

If the incomplete is granted before the end of class, the account will not be locked.

If the incomplete is granted after the class ends but before the account is wiped, the account will be unlocked, but the student will need to set the password again in the manner they did at the beginning of the semester.

5.) Class limits will be strictly enforced for normal students; there will be an entry in the group_policies table setting the limit for nodes to 0 whenever the instructor has not input a schedule entry.

Instructors must set an end date at the same time or before they enter class limits; nodes may not be reserved past the end date.

Instructors, TA's, and students making up incompletes will be exempt from the schedule limit, and nodes which they have in use will not be counted against a class limit.

The exemption for incompletes MUST NOT BE USED for anything else - if a professor is teaching a grad class and has students who want to parlay a final project into a conference paper, the instructor MUST apply for a separate research project!

6.) Instructors will be unable to assign students to classes until a future end date is set.

We believe the only active class currently using DETER is the ComSecHu project - all others have been assigned a presumptive end date of July 16th, so class accounts will be locked on that day, and wiped 4 weeks later.

Regenerating testbed-specific keys this Wednesday Evening
Saturday, January 19, 2013, 5:59PM; posted by sklower.

Clarification: The account was not compromised at DETER itself. This change was based on our experience with an intrusion outside of DETER where the collection of ssh private keys was part of the attacker's toolkit.

Due to a compromised account, we believe that it is prudent to configure ssh in such a way that when one logs into users or boss from the outside the sshd will consult an authorized_keys file which is not exported to experimental nodes and would not contain public keys generated on your behalf by the testbed.

A file users:/etc/ssh/external_keys//authorized_keys writeable by you will be created, although you should let the system manage it for you.

In order to log into experimental nodes, and from one testbed node to another, locally generated ssh public and private keys will be where they used to be.

Any public keys that you have previously uploaded via the web interface into the testbed should continue to work, and both places should be automatically maintained on your behalf.

We were, of course, unable to run the utility to make the transition during the unscheduled downtime on Saturday morning, and will instead do so on Wednesday.

Upgrading Serial Controllers
Friday, December 21, 2012, 4:49PM; posted by jhickey.

Serial console access will be interrupted for the pc3060 machines and the pc3000 machines this evening.

Heavy testbed use until 9/20
Thursday, September 13, 2012, 11:38AM; posted by sunshine.

We have two big research events coming up - one next week and one in October. Users may see a shortage of nodes at least until 9/20 as research groups prepare demos. We expect availability of nodes to improve after 9/20. Please let us know if this creates problems for you and we'll try to help you out.

Testbed Downtime 6PM until 6AM Pacific Time this Thursday
Tuesday, August 28, 2012, 3:54PM; posted by jhickey.

We will be hosting a very large experiment during this time. Access to testbed nodes will be restricted and all experiments will be swapped out in order to accommodate the large experiment.

Special Downtime Sunday, August 26 at 12PM PST
Saturday, August 25, 2012, 6:00PM; posted by jhickey.

I need to reboot our main server that controls the web interface and controls experimental swap-ins. This downtime should only last ~15min.

Large experiment this Thursday
Sunday, August 19, 2012, 1:16PM; posted by jhickey.

We will be swapping in a very large experiment this Thursday. Testbed usage will be restricted and active experiments may be swapped out.

Ubuntu1204-64-STD image released
Wednesday, July 25, 2012, 2:14PM; posted by bks.

Enjoy; please file a ticket for any issues.

Experimental Gateway Fixed
Tuesday, July 24, 2012, 2:59PM; posted by jhickey.

We ran into hardware problems with our old Experimental gateway that bridges the ISI part of the testbed with the UCB part of the testbed. I had some new hardware already mostly setup to replace the old machine, so I deployed it this afternoon to get things connected again.

Special Downtime Tuesday, July 17 at 8PM Pacific
Monday, July 16, 2012, 6:35PM; posted by jhickey.

I will be upgrading the ISI side of the link between ISI and UCB. Additionally I will be moving the serial server responsible for power control for the pc2133 and pc3000 machines. I expect the downtime to last about an hour.

Swaps disabled (enabled now)
Wednesday, June 13, 2012, 9:00PM; posted by jhickey.

I am working on fixing the event system. Swaps will be enabled when things are fixed.

Update: The event system should be fixed now.

Testbed back in order
Sunday, May 27, 2012, 2:01PM; posted by jhickey.

The event system server died on users last night. This was preventing experiments from fully swapping in. Everything should be back up and running now.

Quotas enabled for project/group directories
Thursday, May 17, 2012, 4:49PM; posted by jhickey.

/proj is getting full, so it is time to enable quotas. We have given all projects a soft limit of 10GB and a hard limit of 30GB. Please contact us if you have any questions or need more space.

CSET '12 submission EXTENDED to April 26th
Friday, April 20, 2012, 11:12AM; posted by sunshine.

Please consider submitting a paper on your experimental research to CSET '12, held in conjunction with the USENIX Security Symposium, Aug. 6 in Bellevue, WA. Visit for details. Deadline is EXTENDED to April 26th.

PBS NewsHour Story on DETER
Wednesday, April 18, 2012, 2:15PM; posted by jhickey.

Downtime tonight at 10PM Pacific for database work
Tuesday, March 27, 2012, 4:04PM; posted by jhickey.

I will be doing some work on the database tonight. Experiment swapping as well as the web interface will be unavailable during this time. I expect the downtime to last less than an hour.

Windows XP Image updated
Wednesday, March 21, 2012, 6:13PM; posted by jhickey.

The standard WINXP-UPDATE image has been updated to include all recent security patches. Additionally, we now include .NET support by default (versions 3.5 and 4.0 are installed).

Ubuntu1004-STD image updated
Friday, March 2, 2012, 1:27PM; posted by jhickey.

We fixed an issue where Ubuntu1004-STD would sometimes fail to boot properly. Swap in reliability for large experiments should be improved. For instructions on updating your custom images made before March 2, 2012, please refer to the Ubuntu wiki page.

Experiment Isolation being reinstated
Friday, February 17, 2012, 5:31PM; posted by sklower.

We will resume experiment isolation, turning it off and on a couple of times during the scheduled downtime tomorrow (sat 2/18).

With ~30 experiments, swaps should be frozen for about 3 and a half minutes each time, with a period of disconnection for each node being less than 30 seconds.

Berkeley Switch Interconnect Upgraded to 28 - 40 Gb/s
Wednesday, February 1, 2012, 7:40PM; posted by bks.

This is more than an order of magnitude improvement. Also, all Berkeley switches are HP Procurves.

Brief downtime tonight at 8PM Pacific: Bad memory in Boss.
Saturday, January 21, 2012, 7:03PM; posted by jhickey.

Boss is reporting that one of the memory DIMMs is failing. I will be shutting down boss tonight to swap the module out. I will also be using the downtime to upgrade the web server on boss.

ISI now using 20GbE star topology and Nortel10 replaced.
Saturday, January 14, 2012, 2:34AM; posted by jhickey.

All experimental switches at ISI are now connected to a single HP 5412 switch using a pair of 10GbE interfaces for each of the 6 experimental 5412 switches. This is a major upgrade over our previous experimental topology at ISI, which was only partially 10GbE. Also, the last Nortel switch, which was serving the pc3000 machines, has been replaced with a pair of HP 5412 switches. The ISI side of the switch upgrade is now complete.

Experimental Switch for the pc3000 machines to be replaced Friday Afternoon
Wednesday, January 11, 2012, 11:06PM; posted by jhickey.

As the final part of the ISI switch upgrade, I will be replacing the Nortel stack that currently supports the experimental interfaces of the pc3000 machines. Additionally, I will be switching our network topology to a star topology with 20 Gbit interconnects. Experimental swap-ins will be limited during this time and experiments that incorporate multiple node types at ISI (this switch currently acts as the head switch at ISI) will have their connectivity disrupted. I should begin work on this around 3 PM Pacific time. Swapping out such large switch stacks takes a fair bit of time. I expect the testbed to be back on the air by 12 AM Friday night.

Experimental switch for pc2133 and NetFPGA machines to be replaced on Monday.
Friday, December 16, 2011, 2:58PM; posted by jhickey.

I will be replacing the Cisco 6509 that is currently functioning as the control network switch for the pc2133 and NetFPGA machines Monday afternoon. The nodes will be unavailable during the upgrade and any experiments using them will be swapped out.

Enhanced Network Delay/Loss functions for DETER
Wednesday, December 14, 2011, 2:39PM; posted by faber.

DETER announces the deployment of new delay nodes based on the Click modular router. Besides supporting the functions provided by the old delay nodes, they facilitate specifying sophisticated loss models that allow burst losses. Moreover, they emulate both variable and static delays; variation in delay can be described by a Normal, Poisson, or Exponential distribution. We encourage users to try them. All you have to do is replace “make-lan” or “duplex-link” with “make-deter-lan”. For more information please visit.
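As a sketch of that one-line substitution (the topology, node names, and link parameters below are hypothetical, and we assume make-deter-lan accepts the same bandwidth/delay arguments as make-lan):

```tcl
# Hypothetical ns topology fragment; names and rates are illustrative only.
set ns [new Simulator]
source tb_compat.tcl

set n0 [$ns node]
set n1 [$ns node]

# Before: a LAN using the old delay nodes
# set lan0 [$ns make-lan "$n0 $n1" 100Mb 20ms]

# After: the same LAN backed by the new Click-based delay nodes
set lan0 [$ns make-deter-lan "$n0 $n1" 100Mb 20ms]

$ns run
```

See the release documentation for the loss-model and delay-distribution options mentioned above.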

Experimental Switch for the pc3060 and pc3100 machines upgraded.
Friday, December 2, 2011, 11:37PM; posted by jhickey.

We replaced our aging Nortel experimental stack that serves the pc3060 and pc3100 machines with a pair of new HP 5412zl switches tonight. We will be upgrading the rest of the experimental network here at ISI over the next few weeks.

pc3060 nodes unavailable on Friday, December 1.
Monday, November 28, 2011, 6:44PM; posted by jhickey.

I will be replacing the experimental switch currently serving the pc3060 and pc3100 machines here at the ISI side of DETER. I will start removing these machines from the available pool of nodes starting on Thursday. The switch upgrade should take a few hours and the machines should be back in service by Friday evening. Experiments still using these nodes after 12PM on Friday will be swapped out.

Limited availability of DETERlab experimental nodes, November 11 - 21, 2011
Thursday, October 27, 2011, 3:53PM; posted by jhickey.

During the period November 11-21 (and especially Nov 12-18), an important research program will have priority use of the DETERlab testbed. During this period, users may experience difficulty swapping in large experiments, and at times all users may be preempted by forced swap out. If the nodes you need are available, you can use them, but understand that your nodes may be preempted with little advanced warning. Refer questions to

Testbed updated (New OS, Packages, and updated Testbed Software)
Sunday, September 18, 2011, 8:07AM; posted by jhickey.

Boss and users have been updated to FreeBSD 8.2 with current packages. The package build took much longer than expected. The testbed software has also been updated to the latest in our repo. If you suspect a bug, please do not hesitate to contact us.

Main testbed servers getting upgraded tonight around 10 PM Pacific Time (Saturday)
Saturday, September 17, 2011, 5:02PM; posted by jhickey.

I will be upgrading boss and users late tonight to minimize disruption. The process should take a few hours since I will be upgrading the packages on these hosts (in FreeBSD packages are compiled from source).

New OS Images: Click Modular Router (Ubuntu804-click and Ubuntu1004-click20)
Thursday, September 15, 2011, 2:31PM; posted by mikeryan.

We are happy to announce official support for the Click modular router on DETER. Please use Ubuntu804-click or Ubuntu1004-click20

BackTrack 5 R1 Image Available
Tuesday, September 6, 2011, 3:21PM; posted by bks.

Last month's release of the BackTrack penetration testing arsenal is available as BT5R1. To access the node's desktop from your workstation, log into the node and start a VNC server:

pcNNN% sudo -H vncserver -geometry 1200x900
Then start a viewer on your workstation:
workstation% vncviewer -via users.isi pcNNN:1
This works for tightvnc; you could also start your own ssh tunnel.
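The manual-tunnel alternative mentioned above can be sketched as follows (NODE is a placeholder matching the pcNNN example, and users.isi is written as abbreviated above; a VNC server on display :N listens on TCP port 5900+N):

```shell
# Hypothetical sketch of forwarding a VNC display over your own ssh tunnel.
NODE=pcNNN            # placeholder node name from the example above
DISPLAY_NUM=1         # vncserver above came up on display :1
VNC_PORT=$((5900 + DISPLAY_NUM))
echo "$VNC_PORT"      # prints: 5901

# Forward a local port through users to the node, then point a viewer at it:
#   ssh -L ${VNC_PORT}:${NODE}:${VNC_PORT} users.isi
#   vncviewer localhost:${DISPLAY_NUM}
```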

FBSD8-STD updated to FreeBSD 8.2
Tuesday, July 19, 2011, 1:04PM; posted by bks.

We updated FBSD8-STD from FreeBSD 8.1 to 8.2. If for some reason your experiment requires 8.1, that remains available as FBSD81-STD.

CentOS5 image updated to CentOS 5.6
Monday, July 18, 2011, 4:33PM; posted by jhickey.

We have mirrored CentOS 5.6 to our local package mirror and updated our CentOS5 image to the latest 5.6. Please file a ticket if you run into any problems.

Serial console restored for remaining PCs.
Saturday, June 25, 2011, 11:39PM; posted by jhickey.

pc001 through pc064 should all have serial console now.

Serial consoles restored for pc001 through pc031. pc032 through pc064 will be restored tomorrow.
Friday, June 24, 2011, 7:33PM; posted by jhickey.

We replaced the failed serial server on Thursday evening, but we are being slowed down because the pin-outs for the RJ45 connectors on the new server are different from the old one. Luckily, we can rearrange the pins on our existing inventory of adapters, but it takes a little time to do.

Serial server for the pc2133 machines down
Thursday, June 23, 2011, 2:25PM; posted by jhickey.

We are experiencing problems with the serial server that serves the pc2133 machines. I am in the process of replacing it, since we were intending to do so anyway. This will take a little bit of time. I expect serial consoles to be back up later in the day.

Introducing ThirdEye, a semantic analysis framework
Tuesday, June 21, 2011, 2:22AM; posted by jhickey.

We have a new analysis tool for researchers to use with DETERlab experiments: ThirdEye. ThirdEye is a semantic analysis framework to identify interesting relationships in network and cybersecurity experiment data. If you want to explore what is going on in your experiment, or automatically ensure every experiment trial is valid, you will find ThirdEye useful.

Read more about ThirdEye at

Rebooting boss and users tonight (Wed, June 15) at 10:00pm Pacific time
Wednesday, June 15, 2011, 7:04PM; posted by jhickey.

We will be performing a quick reboot of both boss and users. Experimental swaps will be disabled during this time.

Bug in New Project form fixed
Tuesday, May 17, 2011, 9:23AM; posted by mikeryan.

If you attempted to create a new project in the last two days and there was a failure, please resubmit your project application. We apologize for the inconvenience.

service outage this morning has been resolved
Friday, May 6, 2011, 9:25AM; posted by mikeryan.

We had an internal service outage this morning which resulted in:

  • a loss of connectivity on the control network
  • massive node shortages

We've tracked it down and fixed it. Connectivity and node availability should be restored to normal.

DETER Chat (IRC) no longer official support medium
Thursday, April 14, 2011, 4:16PM; posted by mikeryan.

We are no longer supporting DETER Chat (IRC) as an official support medium. Please follow instructions on the contact page if you have an issue you need to report.

More bpc2133 problems ... cooling is failing
Friday, April 1, 2011, 4:27PM; posted by sklower.

We may have to power down the bpc2133 nodes quite soon; we will post again when it is resolved.

Switch Firmware Update Will Disrupt UCB Node Connectivity Friday Afternoon
Thursday, March 24, 2011, 5:11PM; posted by bks.

We hope the disruption will be short, but in the worst case connectivity will be unstable between 12:00 and 17:00 PDT.

Web Login by email address is no longer supported.
Wednesday, March 23, 2011, 6:01PM; posted by jhickey.

DETER no longer supports using your email address instead of your DETER username when logging into the web interface. Allowing the use of email addresses instead of usernames for the web interfaces appears to have caused a fair amount of confusion for new DETER users as to what their username really is. In order to be consistent with the username supplied when logging into testbed nodes and users, we have disabled the ability to use an email address for the web interface.

Rebooting the internet facing firewall at ISI today at 5PM Pacific Time.
Monday, March 7, 2011, 2:23PM; posted by jhickey.

We will be rebooting the ISI internet facing firewall today at 5PM to complete an update. DETER will be unreachable for a few minutes as the machine boots.

Anomaly Removed
Wednesday, March 2, 2011, 5:19PM; posted by bks.

John Hickey fixed the connection between a couple switches at ISI.

Disconnection Anomaly Noted Between DETER Campuses
Tuesday, March 1, 2011, 6:30PM; posted by bks.

Deter Ops have noted an instance of unexpected disconnection between nodes at Berkeley and ISI, i.e. between names bpcNNN and pcMMM, on experimental interfaces. We do not know how widespread the problem is, but we are working on it. bpcNNN can reach pcMMM where MMM <= 60, so the disconnection appears to be between two ISI switches.

Additional Regular Downtimes: Saturday Mornings 10am-1pm Pacific
Friday, February 25, 2011, 1:02AM; posted by sklower.

We will be conducting control-net separation testing on DETER on Saturday mornings, on a regular basis.

This coming Saturday (2/26) we anticipate a period of about 40 minutes in which experiments may not be swapped in or out; depending on how well things go, we may leave control-net separation in effect.

Nodes within an experiment will be able to communicate with each other and with the boss and users nodes over the control net, but will not be able to send or receive traffic from nodes in other experiments.

Ubuntu1004-STD Updated
Wednesday, February 16, 2011, 12:09AM; posted by jhickey.

I have applied the latest patches and integrated a tweak or two into the standard Ubuntu 1004 image.

bpc2133's are available again
Thursday, February 10, 2011, 6:56PM; posted by mikeryan.

Power to the black box that houses the bpc2133's has been restored and the nodes are available once again.

CSET'11 Call for Papers
Monday, February 7, 2011, 12:05PM; posted by sunshine.

Please consider submitting your paper to CSET'11, held in conjunction with the USENIX Security Symposium, August 8, 2011 in San Francisco, CA. This year, we're accepting regular papers, position papers, and extended abstracts. The submission deadline is April 18. For more information please visit:

Downtime extended until 7PM
Saturday, February 5, 2011, 6:10PM; posted by jhickey.

It will be about another hour before things are operational again. Sorry for the delay.

Additional Testbed downtime TODAY until 5PM Pacific Time
Thursday, February 3, 2011, 3:44PM; posted by sklower.

We will be replacing the control net switches between 10:30am and 5pm on Saturday February 5th.

Nodes in your long-running experiments may be unreachable for periods up to 3 hours at a time.

Users upgraded to FreeBSD 7.4-RC3
Tuesday, February 1, 2011, 2:08PM; posted by jjh.

In order to solve some NFS locking problems, we have upgraded to FreeBSD 7.4-RC3. So far, things seem to be working fine.

Rebooting USERS today, Feb 1, at 2PM Pacific Time.
Tuesday, February 1, 2011, 12:10PM; posted by jjh.

We are upgrading to the release candidate of FreeBSD 7.4 in order to fix some NFS locking problems we have been experiencing. This should be a quick downtime of about 15 min.

UC Berkeley connectivity restored, bpc2133's remain unavailable
Thursday, January 27, 2011, 1:47PM; posted by mikeryan.

Our connection at UC Berkeley was dropped this morning, and a backup link has been brought up. The facility housing the bpc2133's remains unpowered and those nodes are still unavailable.

We are experiencing problems with the UCB side of the testbed.
Thursday, January 27, 2011, 9:26AM; posted by jjh.

We are working on fixing the issue. Please stand by.

bpc2133 Nodes Unavailable
Thursday, January 27, 2011, 9:01AM; posted by bks.

The Sun Mobile Data Center which houses the bpc2133 nodes shut itself down.

CentOS5 Image Updated with NetFPGA support.
Thursday, January 20, 2011, 4:16PM; posted by jhickey.

We now have built-in support for NetFPGA devices in our standard CentOS5 image. Additionally, all recent updates have been applied to the image and we are now tracking RPMforge for packages that are not included in the CentOS distribution.

Ubuntu1004-STD image updated.
Tuesday, January 4, 2011, 3:31PM; posted by jhickey.

We have applied the latest updates to the image and added in a new software repository. Please let us know if you run into any issues with the new image.

Taking disk images and snapshots made easier.
Thursday, December 2, 2010, 7:12PM; posted by jhickey.

We have changed the way that disk images and snapshots are created. We now have links in the 'Reserved Nodes' list in Experiment Information to easily allow you to create new disk images and take snapshots.

By specifying the node up front, we are able to inherit all the metadata from the image that is currently running on the node. This means that end users will no longer need to select partitions or specify which nodes an image supports. We hope this will eliminate confusion about which nodes an image supports and which partition scheme is used.

Please contact us if you run into problems with this updated functionality.

Problem with pc2133 machines resolved....
Thursday, November 11, 2010, 6:01PM; posted by jhickey.

The experimental switch for the pc2133 machines has been reconnected with the rest of the testbed experimental switches.

pc2133 machines experiencing problems...
Thursday, November 11, 2010, 3:11PM; posted by jhickey.

We are looking into a problem with the switch that serves the experimental connections for the pc2133 machines.

Event system restarted...
Tuesday, November 9, 2010, 1:35PM; posted by jhickey.

The event system was stuck. We have restarted the service and are looking into the cause. Experiment swap-ins should work now.

Berkeley Nodes back to normal
Friday, November 5, 2010, 6:26PM; posted by jhickey.

The boot problems with the Berkeley nodes have been fixed. The problem was related to the testing of control network separation during yesterday's downtime.

Berkeley nodes experiencing boot problems.
Friday, November 5, 2010, 3:52PM; posted by jhickey.

There appears to be a problem with the testbed nodes at Berkeley causing them to fail to boot from the network. We are still tracking down why they can't boot.

Tracking down network issue with our connection to the world.
Tuesday, November 2, 2010, 5:42PM; posted by jhickey.

Our link to the outside world is currently running at 100Mbit/Half-Duplex. We are talking with our network provider to resolve this issue.

Central Nortel control network switch replaced with an HP switch.
Monday, November 1, 2010, 6:54PM; posted by jjh.

In order to better diagnose multicast problems that we have been having with control network separation, we have replaced one of the main Nortel control network switches with an HP 2810 switch.

Disk Quotas applied to user accounts.
Friday, October 29, 2010, 2:27PM; posted by jhickey.

A quota of 10GB with a temporary maximum of 20GB has been set for all users. Please file a ticket if you require more space in your home directory.

Special Downtime, Monday November 1st from 6PM until 8PM.
Friday, October 29, 2010, 1:57PM; posted by jhickey.

I will be swapping out one of our Nortel control network switches with an HP model. This is the main control network switch, so the entire control network will be interrupted during this downtime. The actual swap should be fairly quick, but I am scheduling two hours in case something goes wrong.

Rebooting users again.
Thursday, October 28, 2010, 9:11PM; posted by jhickey.

Ran into some NFS problems. It will take about 30 min for users to come back online. Sorry for the unscheduled downtime.

Users rebooted with new kernel to address disk space issues.
Thursday, October 28, 2010, 7:41PM; posted by jhickey.

Users has been rebooted with a new kernel that supports disk quotas and is currently coming back online.

/mnt/other filesystem is full. This will be cleared up shortly.
Thursday, October 28, 2010, 6:07PM; posted by jhickey.

The /mnt/other filesystem is full and we are working with some heavy users to clean up their home directories.

CentOS 5 image updated.
Tuesday, October 26, 2010, 8:08PM; posted by jhickey.

The same fix that was applied to the Ubuntu1004-STD image has now been applied to the CentOS 5 image.

Ubuntu 10.04 LTS Image updated
Tuesday, October 26, 2010, 7:15PM; posted by jhickey.

The new image fixes a bug in the script which caused incorrect 3rd and 4th partitions to be created. These partitions in turn caused a problem when creating disk images after mkextrafs was run. Also, the installed packages and kernel were brought up to date.

Default Operating System changed to Ubuntu1004-STD
Friday, October 15, 2010, 6:29PM; posted by jhickey.

The default operating system for nodes where an operating system has not been specified has been changed from the old FC6-STD image to Ubuntu1004-STD. Please contact us if you run into problems.

Moved to new IP address.
Monday, October 11, 2010, 7:30PM; posted by jhickey.

We have moved to a new IP address. It can take up to an hour for the DNS change to propagate.

Changing's IP address tonight at 7PM Pacific Time
Monday, October 11, 2010, 2:50PM; posted by jhickey.

We will be moving to a different IP address. Trac will be unavailable for about an hour from 7PM until 8PM assuming things go right.

Updating control network switch firmware during the downtime today.
Thursday, September 30, 2010, 3:03PM; posted by jhickey.

We will be upgrading the firmware of the Nortel switches on the control network today. This will disrupt communication between testbed nodes and users/boss. We do not expect the downtime to be long, but connectivity may be sporadic as the different switches are rebooted.

Isolation of Berkeley nodes postponed.
Wednesday, September 22, 2010, 1:21PM; posted by sklower.

The Berkeley campus network staff has cancelled its upgrade of the core router through which Berkeley DETER traffic passes. They have not yet posted the revised downtime. Experiments swapped in will continue to function normally between 5am and 7am tomorrow.

Berkeley blackbox nodes isolated today 4-5:15pm
Tuesday, September 21, 2010, 5:30PM; posted by sklower.

This unplanned outage was the result of an as yet undetermined bug in the control software.

Berkeley nodes isolated 5am-7am thursday 9/23, swaps may not work
Monday, September 20, 2010, 2:52PM; posted by sklower.

The campus router through which the Berkeley side of the testbed is connected to the outside world will be undergoing maintenance from 5am to 7am this coming Thursday, 9/23.

Berkeley nodes for swapped in experiments will not be able to access the home directories mounted by NFS, and since the testbed switches will also be unreachable from the ISI boss, swap ins/outs/modifies will fail.

When campus finishes its maintenance, things will start working again on the testbed.

Rebooting and other trac project sites at 3pm.
Monday, September 20, 2010, 12:55PM; posted by jhickey.

We will be applying the latest patches which include a new kernel. trac should be unavailable for a few minutes around 3PM pacific time today.

Updated Ubuntu804-STD and Ubuntu1004-STD
Wednesday, September 15, 2010, 6:48PM; posted by jhickey.

Updated both Ubuntu images to turn off automatic filesystem checks. Also applied any outstanding updates to each image.

CentOS package mirror now updated automatically.
Friday, September 10, 2010, 5:47PM; posted by jhickey.

The local package mirror (scratch) for CentOS 5.5 is now updated daily. This is in addition to daily updates of the Ubuntu repository.

Gateway machines updated at ISI...
Friday, September 10, 2010, 5:45PM; posted by jhickey.

The gateway machines that link the ISI control and experimental networks with UCB have been updated to FreeBSD 7.3.

Additional Downtime Sept 5, 5-7pm
Saturday, September 4, 2010, 1:26PM; posted by sklower.

There will be some more control net separation testing on Sunday, September 5th, from 5PM to 7PM.

Users will not be able to swap in, swap out, or modify their experiments during this time, but swapped-in experiments should continue to run, with at most about 30 seconds of disconnection from the testbed at the beginning and end of the period.

It is questionable whether users will be able to reload nodes for swapped-in experiments.

Modifying swapped in experiments works again
Thursday, September 2, 2010, 2:54PM; posted by mikeryan.

We pulled in a fix from upstream, and all appears to be well.

CentOS 5 image updated
Tuesday, August 31, 2010, 2:38PM; posted by jhickey.

We have updated the CentOS 5 image (CentOS5) to CentOS 5.5.

New experiments not swapped in by default
Thursday, August 26, 2010, 11:53AM; posted by mikeryan.

When you create a new experiment the default behavior is to create the experiment but leave it swapped out. If you want to swap it in as soon as it's created, check the 'Swap In Immediately' box.

Ubuntu 10.04 LTS Image updated
Friday, August 20, 2010, 7:43PM; posted by jhickey.

The Ubuntu1004-STD image has been updated to tweak how partitions work (switched from UUID to labels in /etc/fstab) and all recent security updates have been applied.
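For reference, label-based mounting replaces the long UUID strings in /etc/fstab with short volume labels. A sketch of what such an entry looks like (the label names, filesystem type, and options below are illustrative, not the actual contents of the image):

```
# Hypothetical /etc/fstab excerpt: mounting by label instead of UUID.
# Before (UUID-based):
#   UUID=1c6e2f34-...  /     ext3  defaults  0 1
# After (label-based), assuming the root partition was labeled "root":
LABEL=root   /     ext3  defaults  0 1
LABEL=swap   none  swap  sw        0 0
```

Labels survive disk-image copies between nodes, where a regenerated filesystem would otherwise get a new UUID and break the fstab reference.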

Using DETER to teach a class this fall?
Thursday, August 12, 2010, 2:01PM; posted by jhickey.

If so, please send a heads up to We need to know your deadlines, TA contacts, and expected class sizes.

Brief downtime this Friday (July 23) from 5PM until 5:30PM PST
Thursday, July 22, 2010, 5:36PM; posted by jhickey.

I will be patching the VMWare server that runs trac, irc, and one of the UCB serial servers. I do not expect this to take more than 30 min.

Downtime extended to 7PM
Tuesday, July 20, 2010, 6:31PM; posted by jhickey.

We are extending the planned downtime for today until 7PM. Sorry for any inconvenience.

Special Downtime from 5PM until 6PM PST on Tuesday, July 20
Thursday, July 15, 2010, 1:47PM; posted by jhickey.

We will be upgrading the switch firmware on the Nortel switches to the latest release which should fix some multicast issues introduced with the previous release. This will affect the control network since the switches need to reboot when the firmware is upgraded. You can track the completion of the upgrade by checking this ticket in trac: (login required). We will also be rebooting boss and users during this time.

CSET 2010 Workshop on August 9th in Washington DC
Tuesday, July 13, 2010, 9:43PM; posted by sridhar.

You are invited to participate in the 3rd CSET (Cyber Security Experimentation and Test) 2010 workshop being held in Washington DC on Monday, August 9th. The Early Bird Registration Deadline is Monday, July 19, 2010 to receive the greatest savings.

Registration and details of the workshop are available at:

Attention, students! CSET '10 has a limited number of student travel grants available. The deadline for applying is July 15.

CSET focuses on the science, design, architecture, construction, operation, and use of cybersecurity experiments in network testbeds and infrastructures. The workshop's scope includes all work relevant to cyber security experimentation and evaluation, including simulation, emulation, deployment, and traffic models.

We have an interesting and stimulating program this year, including a keynote address by Dr. Doug Maughan, Program Manager at the U.S. Department of Homeland Security's Science and Technology Directorate, on "The Role of Testbeds in CyberSecurity Research", along with presentations on cyber-physical systems and emulation testbeds, as well as key sessions on Security Education, Work in Progress, and brainstorming. In addition, a discussion on "Security Experimentation with Cyber-Physical Devices" is scheduled with a panel comprising key practitioners in the field.

We look forward to seeing you in Washington DC!

Modifying swapped in experiments will not work properly
Thursday, July 8, 2010, 1:12PM; posted by jhickey.


If you want to modify your experiment, swap it out first.

We are working on fixing this ASAP. We apologize for any inconvenience.

FreeBSD 7 Image updated to support local package mirror.
Monday, July 5, 2010, 11:04PM; posted by jhickey.

We have placed the binary packages released with FreeBSD 7.3 on scratch and have updated root's .cshrc on the FBSD7-STD image to automatically download from scratch. To install precompiled binary packages on FreeBSD 7, simply use the -r switch with pkg_add.
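As an illustration (the package name here is arbitrary, not a package we specifically provide), installing a precompiled binary package from the mirror looks like:

```
# On a node running FBSD7-STD, as root. The -r switch tells pkg_add to
# fetch the package remotely; root's .cshrc on the image points the
# package location at scratch, so the download comes from the local
# mirror instead of the FreeBSD FTP servers.
pkg_add -r wget
```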

Ubuntu 10.04 LTS Image released
Tuesday, June 29, 2010, 4:03PM; posted by jhickey.

A new Ubuntu 10.04 LTS image is now available. The osid is Ubuntu1004-LTS. Please file a trouble ticket if you run into problems with the image.

Upgrading users and boss to FreeBSD 7.3 during the Thursday downtime.
Monday, June 21, 2010, 9:54PM; posted by jhickey.

FreeBSD 7.2 is going end of life at the end of the month. I will be upgrading users and boss to FreeBSD 7.3. Both machines will be rebooted during the usual Thursday downtime.

Berkeley bpc2133 nodes once again available
Thursday, June 17, 2010, 7:40PM; posted by sklower.

The cooling system in the Berkeley black box has been repaired. The failure was due to a mixing valve under servo control; the water supplied by the building is too cold, so it is mixed with recirculated water. The valve at this location seems to fail and need replacement every 6 to 9 months. We now keep a replacement part *on site*, but we are not at liberty to replace it ourselves; only someone from the central campus maintenance division is authorized to inspect it and either replace it themselves or hire a contractor, and the responsible office is not staffed on weekends. At least this time we did not have to wait an extra three days to have a spare valve manufactured and shipped to us, but it is almost certain that this will happen again in another 6 to 9 months.

DETER Project Review
Wednesday, June 9, 2010, 8:35AM; posted by jhickey.

We are having our project review on June 9th and 10th. Any review related demo experiments will be given priority on the testbed. We do not expect interruption of service as we have already allocated the necessary resources to our experiments.

Downgrading Nortel10 firmware today at 7PM PST
Friday, May 28, 2010, 5:04PM; posted by jhickey.

The control network will be interrupted for about 20 min while I downgrade the firmware on Nortel10 to work around a multicast issue that seems to have been introduced with the latest Nortel firmware.

Extended Control Net testing this week
Sunday, May 23, 2010, 11:11PM; posted by sklower.

Control net separation testing will be happening Monday and Tuesday between 3 and 6PM, and Wednesday, Thursday, and Friday between 5 and 8PM.

Updated FBSD7-STD image
Tuesday, April 27, 2010, 10:27PM; posted by jhickey.

We have updated the FBSD7-STD image to FreeBSD 7.3. This updated image features larger partitions to make building custom kernels easier, and the FreeBSD 7.3 source is available in /share/freebsd/7.3.

Updated CentOS 5 image
Tuesday, April 27, 2010, 3:23PM; posted by jhickey.

The CentOS image has been updated in order to fix a compatibility problem with SEER and to install the latest security fixes.

Special downtime Tuesday, May 4th between 1PM and 3PM PST
Tuesday, April 27, 2010, 2:29PM; posted by jhickey.

We will be enabling control network separation on Tuesday May 4th. There should be about a 15 minute interruption on the control network between 1PM and 3PM PST.

Student travel grants for IEEE Security and Privacy Symposium
Wednesday, March 31, 2010, 12:48PM; posted by sunshine.

A significant number of student travel grants are available for the IEEE Symposium on Security and Privacy in Oakland, California. This is a premier conference in security and privacy. The eligibility criterion is that one must be a student at a US institution.

For more information and to apply see: The deadline is April 2 but it may be extended.

Rebooting main DETER firewall during the downtime on Thursday.
Wednesday, March 31, 2010, 1:26AM; posted by jhickey.

The main firewall for DETER will be updated to FreeBSD 7.3 and rebooted during our normal downtime on Thursday, April 1. The downtime should be minimal.

CFP: CSET 2010 (CyberSecurity Experimentation and Test Workshop)
Monday, March 22, 2010, 12:22PM; posted by sunshine.

On behalf of the 3rd Workshop on Cyber Security Experimentation and Test (CSET '10) program committee, we'd like to invite you to submit papers on the science, design, architecture, construction, operation, and use of cyber security experiments in network testbeds and infrastructures. Please submit all papers by May 24, 2010, 11:59 p.m. PDT.

Topics of interest include but are not limited to:

  • Science of security/testbed experimentation
    • Data and tools to achieve realistic experiment setup/scenarios
    • Diagnosis of and methodologies for dealing with experimental artifacts
    • Support for experimentation on a large scale (virtualization, federation, high fidelity scale-down)
    • Tools and methodologies to achieve, and metrics to measure, correctness, repeatability, and sharing of experiments
  • Testbeds and methodologies
    • Tools, methodologies, and infrastructure that support risky experimentation
    • Support for experimentation in emerging security topics (cyber-physical systems, wireless, botnets, etc.)
    • Novel experimentation approaches (e.g., coupling of emulation and simulation)
    • Experience in designing or deploying secure testbeds
    • Instrumentation and automation of experiments; their archiving, preservation, and visualization
    • Fair sharing of testbed resources
  • Hands-on security education
    • Experiences teaching security classes that use hands-on security experiments for homework, in-class demonstrations, or class projects
    • Experiences from red team/blue team exercises
Submissions are due Monday, May 24, 2010, 11:59 p.m. PDT. For more details on the submission process, please see the complete Call for Papers at:

We look forward to receiving your submissions!

Terry V. Benzel, USC Information Sciences Institute (ISI), CSET '10 General Chair
Jelena Mirkovic, USC Information Sciences Institute (ISI)
Angelos Stavrou, George Mason University
CSET '10 Program Co-Chairs

CFP: International Symposium on ICT System Testbeds
Monday, March 15, 2010, 2:30PM; posted by sunshine.

Developments in cloud computing and ubiquitous network computing have increased the reliability and safety of advanced large-scale network systems, and driven the demand for rapid advances in these systems. Enhancing and upgrading the testbeds used to test network systems has become a necessity.

National Institute of Information and Communications Technology (NICT) and Japan Advanced Institute of Science and Technology (JAIST) will hold "International Symposium on ICT System Testbeds" on March 30, 2010.

This symposium includes lectures by specialists from around the world on international trends and research findings in ICT system testbeds, as well as future prospects and expectations, and examines the future direction for ICT system testbeds in Japan.

For detailed information about the symposium program and participation, please visit We hope you will attend this symposium.

LLNL Student Internship Program
Thursday, March 11, 2010, 2:44PM; posted by sunshine.

LLNL is looking for summer interns for their Cyber Defender Program. They are particularly interested in people who have DETERlab experience. For more information and to apply please visit

Special downtime from 5PM until 7PM PST on Tuesday, March 9.
Monday, March 8, 2010, 4:05PM; posted by jhickey.

We ran into problems with multicast after upgrading the firmware on the Nortel control net switches last Thursday. We were able to work around the problem by enabling IGMP snooping, but snooping may be problematic when used in combination with control network separation. We will be looking further into the multicast issue during this downtime.

Rebooting users today
Monday, March 8, 2010, 3:58PM; posted by jhickey.

In order to address some performance issues I have recompiled the kernel on users to take out some extra debugging features. Users will be rebooted at 6PM PST today.

Nortel Firmware updated...
Thursday, March 4, 2010, 7:30PM; posted by jhickey.

We have updated all Nortel switches in the testbed to the latest firmware available which is supposed to fix the vlan creation problem.

UCB to ISI link performance problem fixed.
Thursday, February 25, 2010, 11:11PM; posted by jhickey.

The control net interface on the ISI gateway was set down to 100mbit after Foundry10 was replaced, causing periodic performance problems for the UCB nodes (in particular whenever nodes were reloading). We have reconfigured the interface back to 1000mbit, and things seem to be working better now.

Foundry4 control net switch replaced
Thursday, February 25, 2010, 11:09PM; posted by jhickey.

Tonight during the downtime (and two hours beyond the downtime) we replaced the remaining Foundry switch on our control network with a pair of Nortel gigabit switches. We apologize that the downtime extended beyond the normal 2 hour window.

Replacing Foundry4 control network switch this evening.
Thursday, February 25, 2010, 5:21PM; posted by jhickey.

During today's normal downtime, we will be replacing our last Foundry control network switch with a pair of Nortel switches. The downtime may last a little longer than usual, but we expect the testbed to be operational again by 9PM PST.

Rebooting Nortel10 and Nortel18 experimental switches
Tuesday, February 23, 2010, 7:37PM; posted by jhickey.

We are having issues with these switches and will be rebooting them shortly (at 8PM PST). Sorry for any inconvenience.

Education with DETER page is live
Monday, February 22, 2010, 3:04PM; posted by sunshine.

If you are using DETER in classes, be sure to check our new page covering DETER policies and support for educational use. You will also find sample class exercises there.

NetFPGA machine integrated into the testbed.
Friday, February 12, 2010, 5:45PM; posted by jhickey.

We have added a machine with a NetFPGA board into the testbed. It will be primarily used to support some classes, but feel free to contact testbed-ops if you are interested in using the machine when it is free.

Replaced Foundry10 control switch during today's downtime...
Thursday, February 11, 2010, 7:57PM; posted by jhickey.

We replaced an older 100mbit Foundry switch that served the control network for the pc3000 class machines with a pair of Nortel 5510 gigabit switches. We were initially going to do this on Sunday, but we decided to do half of the Sunday upgrade during the regular downtime. There will be no downtime on Sunday. Instead, the remaining Foundry will be replaced during the normal downtime next Thursday.

Upgrading the control network switches for the pc2133s and pc3000s.
Friday, February 5, 2010, 7:37AM; posted by jhickey.

On the evening of Sunday, February 14th, I will be replacing two older Foundry 100mbit switches with Nortel gigabit switches. This downtime should begin around 7PM and last a number of hours.

Updated MFS Kernels
Thursday, February 4, 2010, 5:52PM; posted by jhickey.

The kernels for the frisbee, newnode, and administrative operating systems have been updated to FreeBSD 7.2 in order to allow us to add newer hardware to the testbed.

EPEL repository support added to the CentOS 5 image.
Thursday, February 4, 2010, 4:47PM; posted by jhickey.

The Extra Packages for Enterprise Linux (EPEL) repository has been mirrored on scratch, and the appropriate repository entries have been added to the CentOS 5 image.

DETER chat working more smoothly now.
Wednesday, February 3, 2010, 4:24PM; posted by jhickey.

We have moved the chat script onto so that users do not have to accept a self-signed certificate for Sorry for any trouble using the chat feature.

Beta CentOS 5.4 image...
Friday, January 29, 2010, 1:22PM; posted by jhickey.

A new BETA CentOS 5 image is available. The image id is 'CentOS5'. CentOS provides us longer support than the Fedora images do: Fedora releases are supported for only 13 months after a new version comes out, while CentOS 5 is not scheduled to be end-of-lifed until March 31, 2014. We are hoping that CentOS will become a more stable and better-supported alternative to Fedora. Currently we have the base packages and updates mirrored locally. In the future we hope to track the Extra Packages for Enterprise Linux (EPEL) project, which provides Red Hat packages of software that is present in Fedora but not in CentOS.

Nortel Fix coming soon (hopefully)...
Tuesday, January 19, 2010, 6:06PM; posted by jhickey.

I heard from Nortel today that the issue plaguing our Nortel switches is expected to be included with a firmware update which will be released in February. Hopefully this will put an end to the problem with vlans not always getting properly created on the Nortel switches.

Switch problems affecting experiment swap-ins...
Thursday, January 14, 2010, 7:12PM; posted by jhickey.

We are experiencing some problems with our switches here at ISI. You may experience swap in problems.

ISI on holiday until 2010-01-04
Sunday, December 27, 2009, 9:34PM; posted by mikeryan.

ISI will be on holiday until 2010-01-04. While we will attempt to respond to issues as promptly as possible, we cannot guarantee that problems will be resolved until after that date.

Thank you and sorry for the inconvenience.

SSL certificate updated...
Friday, December 11, 2009, 10:39PM; posted by jhickey.

The certificate for has been updated.

Testbed Software Updated...
Friday, December 4, 2009, 8:12PM; posted by jhickey.

We are up and running a more recent snapshot from Emulab. Please let us know if anything seems broken.

Upgrading the testbed software on Friday, December 4th at 7PM.
Thursday, December 3, 2009, 4:18PM; posted by jhickey.

We will be upgrading to an updated Emulab codebase this Friday at 7PM. The testbed will be unavailable during this upgrade.

Kernel bug fixed
Monday, November 30, 2009, 10:56AM; posted by jhickey.

Last week we fixed the kernel memory leak that was causing users to run out of kernel memory about every two days. If you are interested in the details, you can view the FreeBSD problem report w/patch: kern/140853: [nfs] [patch] NFSv2 remove calls fail to send error replies (memory leak!)

users will be rebooting nightly
Monday, November 23, 2009, 3:09PM; posted by mikeryan.

In order to treat some kernel instability problems we will be rebooting users every night at 12:00 AM PST (UTC-8). Access to shell, the web interface, and home directories (within experiments) will be unavailable for the duration of the reboot. If this affects your experiment or logging, please let us know so we can help accommodate you.

Cisco4 back in action.
Monday, November 16, 2009, 5:48PM; posted by jhickey.

We lost a module on Cisco4 which was the connection point for UCB and Nortel10. I have moved the connections to a different module and updated the database, so swap-ins that span multiple machine types should be working again.

Resolver search path issue fixed on users.
Friday, November 13, 2009, 5:28PM; posted by jhickey.

It seems that during the transition from FreeBSD 6.4 to FreeBSD 7.2 the resolver stopped using the search path to resolve host names that contain a '.'. This meant that trying to ssh to node.experiment.project would fail with a host name lookup error. Thankfully, there is an option to control this behavior, and we have enabled it in /etc/resolv.conf. From man 5 resolv.conf:

                 options option ...

                 where option is one of the following:

                 debug         sets RES_DEBUG in _res.options.

                 ndots:n       sets a threshold for the number of dots which
                               must appear in a name given to res_query() (see
                               resolver(3)) before an initial absolute query
                               will be made.  The default for n is ``1'',
                               meaning that if there are any dots in a name,
                               the name will be tried first as an absolute
                               name before any search list elements are
                               appended to it.
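In other words, the fix is to raise that threshold so multi-dot names like node.experiment.project are still tried against the search list. A sketch of the kind of /etc/resolv.conf entry involved (the domain and the exact threshold value here are illustrative placeholders, not the production settings on users):

```
# Hypothetical /etc/resolv.conf sketch. "" is a
# placeholder; raising ndots above the number of dots in a typical
# node.experiment.project name makes the resolver consult the search
# list before treating the name as absolute.
options ndots:3
```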

BOSS updated to FreeBSD 7.2
Friday, November 13, 2009, 5:19PM; posted by jhickey.

During the scheduled downtime this Thursday we upgraded boss to FreeBSD 7.2. So far things seem to be working ok.

USERS upgraded to FreeBSD 7.2.
Thursday, November 12, 2009, 2:52PM; posted by jhickey.

USERS was upgraded to FreeBSD 7.2 since we were having stability issues with 6.4 which were proving hard to track down. We're hoping that either the problem has been fixed or at least we can continue debugging on a more recent version of FreeBSD.

Upgrading USERS to FreeBSD 7.2
Wednesday, November 11, 2009, 5:50PM; posted by jhickey.

Users has been experiencing stability issues. We are upgrading to FreeBSD 7.2 so that we do not end up debugging an issue that has already been fixed. Please let us know if anything is not working properly after the upgrade. Thanks! The upgrade should be complete by 10PM PST Wednesday, November 11, 2009.

Problems loading WINXP-UPDATE should be fixed.
Monday, November 2, 2009, 6:54PM; posted by jhickey.

An obscure testbed software bug that was preventing WINXP-UPDATE from loading on the pc3000 machines has been found and fixed. Sorry for any inconvenience.

Ubuntu804-STD and ubuntu904-UNSUP images updated
Tuesday, October 27, 2009, 8:49PM; posted by jhickey.

There was an error in the sources.list file pointing to an incorrect repository in the Ubuntu804-STD image. I fixed this and updated both images to the latest packages. Please let us know if you run into any problems with the updated images.

Rebuilding scratch (Fedora and Ubuntu package archives)
Monday, October 26, 2009, 5:49PM; posted by jhickey.

We are rebuilding the scratch server, which hosts the Fedora and Ubuntu packages. We expect it to be back online tomorrow.

The swap in issue has been resolved
Friday, October 2, 2009, 12:05PM; posted by mikeryan.

The switch was fixed. Thank you for your patience.

Swap ins may fail intermittently, we are working on a fix
Friday, October 2, 2009, 11:19AM; posted by mikeryan.

We are currently experiencing some difficulties with an errant switch that may cause swap ins to fail. This issue is expected to be resolved within the hour. Please stay tuned.

Packages upgraded on boss and users.
Sunday, September 13, 2009, 7:05PM; posted by jhickey.

The packages installed on boss and users have been updated. If you notice anything strange, please contact testbed-ops.

Downtime this Sunday afternoon, Sept. 13th.
Friday, September 11, 2009, 3:57PM; posted by jhickey.

The testbed will be unavailable Sunday, September 13th for a few hours during the afternoon for some upgrades. Experiments will continue to run and stay swapped in, but I will be turning off access to the testbed while performing the upgrade.

Ubuntu 7.04 packages moved to our local mirror.
Tuesday, April 21, 2009, 1:58PM; posted by jhickey.

The packages that were in /share/ubuntu have been moved to our local package mirror, scratch. There is a new sources list in users:/share/ubuntu to update your existing images with.

Upgrading users kernel...
Wednesday, April 15, 2009, 5:34PM; posted by jhickey.

We had another panic related to NFS exports being modified while in the middle of checking NFS access. I have backported some locking code from the current version of FreeBSD to hopefully address this problem. I will be installing a new kernel during the downtime today.

Testbed News
Monday, April 6, 2009, 3:08PM; posted by jhickey.

I think I have tracked down what was causing users to kernel panic, and I have filed a FreeBSD problem report about it (133439). It was related to FreeBSD and nfsd not being SMP safe. I have taken SMP support out of the kernel for the time being. Also, there was some other fallout from the panics related to accounts that I think has now been taken care of. If you notice anything strange, please let us know.

Problems with users
Monday, April 6, 2009, 10:28AM; posted by jhickey.

We have been having some problems with the users node of the testbed. There appears to be an NFS bug that is somehow being tickled and causing a kernel panic. I have configured users to run a debug kernel and to savecore when this happens so that we can track the bug down.

CSet 09 Call for papers
Tuesday, March 31, 2009, 3:58PM; posted by jhickey.

We invite you to submit papers to the Workshop on Cyber Security Experimentation and Test (CSET '09), to be held on August 10, 2009 in Montreal, Canada. The CSET '09 workshop is co-located with the USENIX Security Symposium.

CSET '09 is bringing together researchers and testbed developers to share their experiences and define a forward-looking agenda for the development of scientific, realistic evaluation approaches for security threats and defenses; it provides an important community forum for the exploration of transformational advances in the field of cyber security experimentation and test.

While we particularly invite papers that deal with security experimentation, we are also interested in papers that address general testbed/experiment issues that have implications for security experimentation, such as traffic and topology generation, large-scale experiment support, experiment automation, etc. We are further interested in educational efforts that involve security experimentation. Please see the workshop URL for a more detailed listing of topics.

Financial assistance is expected to be available for promising students
to help defray costs of attending this workshop, present their papers,
and become more integrated into this important scientific community. We
believe that attending the workshop to present papers and to interact with
researchers and practitioners in cyber security experimentation and test
is an important component of students' education and professional
development. Moreover, students' presence at this workshop will enrich
and broaden the range of workshop activities. Procedures for applying
for a student travel grant are forthcoming.

Workshop URL:

Important Dates

    * Submissions due: May 15, 2009, 11:59 p.m. PDT
    * Notification to authors: June 30, 2009
    * Electronic files due: July 15, 2009

Workshop Organizers

General Chair
Terry V. Benzel, USC Information Sciences Institute (ISI)

Program Co-Chairs
Jelena Mirkovic, USC Information Sciences Institute (ISI)

Angelos Stavrou, George Mason University

Program Committee
1) Paul Barford, University of Wisconsin
2) Andy Bavier, Princeton University
3) Matt Bishop, University of California, Davis
4) Thomas Daniels, Iowa State University
5) Sonia Fahmy, Purdue University
6) Carrie Gates, Computer Associates
7) Alefiya Hussain, SPARTA Inc.
8) Brent Kang, The University of North Carolina at Charlotte
9) Vern Paxson, ICSI
10) Sean Peisert, University of California, Davis
11) Peter Reiher, University of California, Los Angeles
12) Rob Ricci, University of Utah
13) Mark Stamp, San Jose State University
14) Kashi Vishwanath, Microsoft Research
15) Vinod Yegneswaran, SRI International

We hope to see you in Montreal!

CSET'09 Organizers
Terry Benzel (tbenzel at

Angelos Stavrou (astavrou at

Jelena Mirkovic (sunshine at

Upgraded to FreeBSD 6.4 on users and boss
Wednesday, March 25, 2009, 3:31PM; posted by jhickey.

Yesterday we upgraded boss and users to FreeBSD 6.4. So far everything seems to be working without any problems.

Testbed News
Wednesday, November 12, 2008, 3:41PM; posted by jhickey.

Kevin will be testing control net separation again tonight from 5PM until 7PM.

Testing Out Control Net Separation Scheme
Wednesday, November 5, 2008, 12:01PM; posted by lahey.

From 5PM to 7PM this evening, we'll be testing out a new control net separation scheme designed to ensure that all experiments are completely isolated from other experiments, even on the other control network. Our hope is that this change will be completely invisible to users (barring some downtime while we reboot the boss node), but please let us know at if you see problems.

Over the next few weeks we will run a similar series of tests until we finally install the separation scheme as a permanent part of the testbed.

Wednesday, May 28, 2008, 3:52PM; posted by jhickey.


We now have a DETER IRC channel. For more information, go to

Testbed News
Monday, March 24, 2008, 2:01PM; posted by jhickey.

We're organizing the CSET workshop on security experimentation, co-located with USENIX Security. Send us lots of papers!

Quick users downtime March 21st at 7:00am PST
Thursday, March 20, 2008, 3:54PM; posted by jhickey.

There will be a quick downtime tomorrow morning at 7am PST. I will be swapping out a bad memory DIMM in users. The downtime should not last more than 10min.

Network Connectivity to Berkeley and in General
Wednesday, March 19, 2008, 9:09AM; posted by jhickey.

Our upstream switch here at ISI was replaced this morning, and it is still being configured. The link to Berkeley is down at the moment, and you should expect intermittent connectivity problems to ISI.

Monthly DETER User Teleconferences
Tuesday, December 18, 2007, 9:41AM; posted by braden, mirkovic.

We are hosting monthly phone conferences for DETER users to ask questions of the staff, swap issues and solutions, and look for collaborations. All registered DETER users are cordially invited.

The next user call will be on January 10, 2008, 11 am - noon PST. We will send a reminder and agenda one week before the call. Summaries of previous calls can be found at

DETER Community Workshop on Cyber Security and Test 2007 -- Boston, August 6-7, 2007
Wednesday, July 18, 2007, 2:51PM; posted by jhickey.

Join us in Boston, MA, August 6-7, 2007, for the DETER Community Workshop on Cyber Security Experimentation and Test 2007. This workshop will address issues in the design and use of moderate-to-large scale network testbeds to conduct experiments on security topics such as worm propagation, infrastructure defense (e.g., defending the DNS and BGP routing), and denial of service defense. Such experiments are challenging because of complexity, scale, and possible risk.

Testbed News
Monday, December 18, 2006, 12:26AM; posted by jhickey (modified by sklower).

A 200 node experiment has been scheduled for the week of December 18th. The ISI side of the testbed will be unavailable during that time. The week of December 18th is *this week*.

There are a number of Berkeley nodes free for use. To specifically use Berkeley nodes, in your .ns file you can request

tb-set-hardware $node bpc2800

(or bpc3000, or bpc3060)

It is worth noting that LILO-based images are not transportable between bpc2800s and anything else; for image compatibility, the four types pc3000, pc3060, bpc3000, and bpc3060 are essentially identical.
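For reference, a complete minimal .ns file using this directive might look like the following sketch (the node name is illustrative, and the surrounding boilerplate assumes the standard Emulab/DETER NS conventions):

```tcl
# Minimal illustrative .ns file requesting a Berkeley bpc2800 node.
set ns [new Simulator]
source tb_compat.tcl

# Create one node and pin it to the bpc2800 hardware type.
set node0 [$ns node]
tb-set-hardware $node0 bpc2800

$ns run
```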

Upgrade to FreeBSD-6.1
Wednesday, October 18, 2006, 10:38PM; posted by lahey.

We just upgraded DETER to FreeBSD-6.1 and incorporated a series of improvements from Emulab. Please report all problems (and there no doubt will be some!) to

Idle Timeout Fixed
Friday, October 6, 2006, 9:17AM; posted by lahey.

We have recently fixed a misconfiguration of the DETER testbed that was causing idle experiment detection to fail. Our Cisco and Nortel switches were generating periodic proprietary Ethernet packets, which registered with the idle-detection system's network traffic counters.

Now that this is fixed, experiments will start to get swapped out after a period of time with no network, tty, or CPU activity. If you wish for your experiments to remain swapped in past the idle time, you can adjust the experiment metadata to prevent idle swap.

For more information, please consult our node use policies.

Downtime On August 15 & August 16
Thursday, August 10, 2006, 1:50PM; posted by lahey.

On Tuesday, August 15 and Wednesday, August 16, DETER will be down in order to switch over to a new UPS. We expect to power the systems down Tuesday evening around 10PM, and hope to have them back up sometime after 9AM.

Our expectation is that you can leave your experiments swapped in, and your nodes should come back up when we apply power to the testbed. It would be a good idea to do a 'shutdown -h' on your experimental nodes, to cleanly shut down the systems. As with any operation like this, though, there may be further problems.

In case of problems, Keith Sklower is setting up a mirror of the current contents of the ISI machines, so that users can swap in experiments at UCB if necessary.

bpc2800s back on ISI
Monday, June 26, 2006, 12:15PM; posted by lahey.

After some tweaks to the ISI-UCB interconnect, the bpc2800s are now available again from

DETER Power Outage
Saturday, June 24, 2006, 6:17PM; posted by lahey.

DETER experienced an unexpected power outage the morning of Saturday, June 24. It scrambled some switch configurations which took us some time to track down and fix. The testbed should be working again now. Please send mail to if you see problems.

bpc2800s Now Available Via
Wednesday, June 7, 2006, 6:55PM; posted by lahey.

Due to some ongoing difficulties with the bpc2800s, we've decided to make them available via the UCB users node, They will, at least for the next few weeks, not be available via

Keith Sklower is rsyncing the files from the ISI systems onto the UCB systems, so that users should be able to log into the UCB systems with no problems. Please be aware, though, that future rsyncs could overwrite the files stored at UCB, so be careful.

UCB staff have managed to significantly improve the robustness of the serial connections to the bpc2800s, so serial console access should be much improved.

More Nodes Added From UC Berkeley
Monday, May 22, 2006, 7:57AM; posted by lahey.

Keith Sklower of UC Berkeley has added another 30 nodes to the testbed. These bpc2800 nodes will show up as bpc001 - bpc030.

Due to excessive noise in the serial lines, the consoles for these systems have been turned off. You can still run 'console bpc001', but you won't see any output.

As with all Berkeley-based nodes (type bpcxxxx), users should remember that the link between these nodes and the ISI nodes has limited, unpredictable bandwidth, and that this can sometimes affect the speed and reliability of node image loading as well.

DETER Down for Malware Experiments
Friday, May 12, 2006, 11:17AM; posted by lahey.

DETER will be down both Monday, May 15, and Tuesday, May 16, from 2PM to 5PM, for malware experiments. For safety, the testbed (including will be disconnected from the Internet, and all active experiments will be swapped out.

64 New Nodes Available; Software Upgraded
Friday, May 12, 2006, 11:14AM; posted by lahey.

We've got 64 new Dells, similar to the 64 pc3000s already available at DETER, with dual 3.0 GHz Xeon CPUs, 2GB of RAM, and 36GB 15,000 RPM disks. The new systems have larger CPU caches.

62 of the systems have six network interfaces (five experimental interfaces), and are listed on DETER as pc3060s. The two pc3100s have 10 interfaces (nine experimental) to allow for experiments with more complex topologies.

These systems are connected via Nortel switches -- the experimental switch is a stack of seven Nortel 5510s and one Nortel 5530 with dual 10Gb uplinks, while the control switch is made up of two Nortel 5510s. This is similar to the switch configuration for our pc3000s and for the Berkeley nodes.

In addition, Keith Sklower reinstalled the Emulab software with a number of his fixes as well as bug fixes from Utah.

The Limitations On The Number Of Links That Cross Switches
Monday, August 8, 2005, 11:46AM; posted by minchoi.

As mentioned in the weekly report #59, experiments are limited in the number of links that can cross switches and sites. This is due to the way the 'assign' script in the Emulab software works: it allocates 100 Mbps per link even when the ns file specifies a link speed of less than 100 Mbps (e.g., 1 Mbps). This means that the number of VLANs crossing two switches, such as the Cisco and Nortel at ISI, is limited to 10, even when each link is specified at only 1 Mbps.

To increase the number of VLANs across switch boundaries and between the two campuses, we have set the inter-switch trunk speeds in the testbed database to 4 Gbps, instead of the actual 1 Gbps, so that a total of 40 VLANs, instead of 10, can be assigned. The ISI-UCB tunnel speed is set to 1 Gbps, instead of the actual 150 Mbps. Doing this can potentially oversubscribe the trunk or the tunnel.
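The capacity arithmetic above can be sketched as a few lines of Python (this is an illustration of the reservation math, not DETER code; the function name is ours):

```python
# 'assign' reserves a fixed 100 Mbps per cross-switch link,
# regardless of the link speed requested in the ns file.
RESERVED_PER_LINK_MBPS = 100

def max_cross_switch_vlans(trunk_mbps: int) -> int:
    """How many VLANs fit on a trunk when each one reserves 100 Mbps."""
    return trunk_mbps // RESERVED_PER_LINK_MBPS

# Real 1 Gbps inter-switch trunk: only 10 VLANs can be assigned.
print(max_cross_switch_vlans(1000))   # 10
# Trunk speed set to 4 Gbps in the database: 40 VLANs,
# at the cost of potentially oversubscribing the real trunk.
print(max_cross_switch_vlans(4000))   # 40
```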

In the diagram at the above URL, the numbers next to the arrows give the actual bandwidth of the links, and the numbers in parentheses give the number of VLANs that can be assigned over each link. Please keep in mind that these links are shared by all the experiments running on the testbed.