@make{report}
@device(x2700)
@flushleft[Edinburgh Regional Computing Centre]
@Majorheading[Proposal to Enhance the Edinburgh University Network]
@subheading{Introduction}
In 1983 the Board funded the purchase of two large GEC Campus Packet
Switching Exchanges (CPSEs) to form the nucleus of the Edinburgh University
X.25 network, thus replacing the older RCO network.  This equipment enabled
the University to connect a total of 62 hosts, terminal concentrators and
gateways, including a link to JANET.

The Board will recall that one of the problems that Edinburgh University
faces is a split between two sites some two kilometres apart.  It was
therefore necessary to install two switches, one on the King's Buildings
science campus and the other in the main University area in George Square.
@subheading{Current Network}
Since 1983, the University has funded the gradual enhancement of the two
main switches to close to their maximum possible size.  In addition, the
University has upgraded and extended the smaller GEC switch which formed
the pilot X.25 network.  All three switches are now close to their
permitted maximum number of connections.  A diagram showing the
configuration of the Edinburgh X.25 network is attached.  In total it
supports approximately 120 synchronous connections.  These connections
support over 30 hosts (mainframes and multi-user minis) and nearly 60
terminal concentrators, with an overall total of more than 2000
asynchronous terminals.  Because of an acute shortage of ports on the
CPSEs, a number of the connections have been supported by attaching them
to CAMTEC PADs acting as switches, but these PADs have not proved entirely
satisfactory, mainly because of the limited speed of connection possible
between a PAD and a CPSE.
@subheading{Current Problems}
The problems we face concern connectability (the ability to link
extra hosts and PADs into the network) and reliability.  We are under heavy
pressure to provide additional connections to both the George Square
and King's Buildings CPSEs.  Some of this is because of the increasing
number of hosts, and some because of the gradual replacement of the
original PDP11-based terminal concentrators by a much greater number of
CAMTEC PADs. (The original concentrators could support more terminal
connections than a PAD.)  It had been hoped to install two CAMTEC
switchpads to relieve this demand, and these have been on order for over a
year.  Present evidence suggests, however, that it would be unwise to rely
on these for network expansion, at least in the current academic year.

A further cause for concern has been the overall unreliability of the
existing GEC packet switches.  The MTBF of each of the large switches
during the past year has rarely risen above 600 hours, against an expected
2000 hours.  We have of course indicated our concern to GEC; they point out
that our switches are fully configured and heavily loaded - our busiest
CPSE handles about 7 M packets/day and carries about 420 Mbytes of
data/day - and that we tend to encounter problems before other sites do.
GEC can reproduce a particular problem once it has been observed, but
cannot simulate the varied load we place on their switches, and so cannot
guarantee to detect problems before they occur in the field.  No doubt
there is considerable truth in this, and no doubt it is helpful to other
users of GEC switches that we are often the first to hit a problem.  It is
not, however, helpful to our user service, which has been seriously
affected.  A CPSE constitutes a single point of failure for a substantial
part of the network.  This is especially serious for users in the Central
Area of the University, where there is a single CPSE for the entire
community and virtually no ability to fall back in the case of failure.
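
As a rough illustration of these figures, assuming the daily traffic to be
spread evenly over the day:
@begin[verbatim]
   7 M packets/day over 86,400 s/day  =  approx. 81 packets/s on average
   420 Mbytes over 7 M packets        =  approx. 60 bytes/packet on average
   MTBF of  600 hours                 =  approx. one failure every 25 days
   MTBF of 2000 hours                 =  approx. one failure every 12 weeks
@end[verbatim]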
@subheading{The Future}
It is clear that an extra switch is now necessary in the George Square
area, both to meet the demand for more connections and to give some backup
capability for the switch and for the inter-switch links.
The proposal is to move the smaller GEC switch from the King's
Buildings area to the George Square area and to purchase a new switch for
the King's Buildings area.

We had considered creating a system of satellite switches feeding the
existing GEC switches over Kilostream-speed links.  This would have enabled
the University to make use of the spare throughput capability of the GEC
switches (the limit we are hitting is one of connectability) by
concentrating the low-speed links onto high-speed links.  This strategy has
had to be rejected because it introduces additional points of failure and,
as the network expands, channels more and more of the network's traffic
through the existing three switches.  The proposed strategy is instead to
create a distributed system of fully inter-connected switches.  The failure
of one of the main switching nodes can then be coped with by a combination
of automatic re-routing and, in the case of users with critical service
requirements, some manual re-routing of connections.  This would not
provide a complete solution to the problems of switch failure - only
extensive duplication would do that - but it alleviates some of the most
serious of them.
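
As a simple illustration of the proposed topology: with the three existing
GEC switches and the new switch fully inter-connected, the number of
inter-switch links required is
@begin[verbatim]
   n(n-1)/2  =  (4 x 3)/2  =  6 links for n = 4 switches
@end[verbatim]
and the failure of any one switch still leaves the remaining three directly
connected to one another, which is what makes the automatic re-routing
described above possible.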
@subheading{The Proposal}
It is proposed that a TelePAC switch should be installed to meet the need
for additional connections and better resilience, and to act, on behalf of
the community, as a pilot site for this particular switch.  The TelePAC has
been chosen because it has a modern architecture with only a small number
of physical boards and thus, theoretically at least, a better hardware MTBF
than the older, larger GEC switches.  In addition, its throughput on a
30-line system will at least equal that of the largest GEC switch.  It also
has a cost advantage over the GEC switches that will make it cost-effective
in the long term to install multiple TelePACs rather than the larger GEC
switches.  The intention is to transfer the existing lines gradually from
the small GEC switch to the TelePAC as confidence is gained in the hardware
and the software.  We would propose to report on our experiences with this
switch, from the standpoint of hardware, software and the overall support
service, at the end of April.
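
The expected MTBF advantage follows from simple reliability arithmetic: if
board failures are independent, failure rates add across the boards of a
switch, so that, as a rough rule of thumb,
@begin[verbatim]
   system MTBF  =  approx. (MTBF of a single board) / (number of boards)
@end[verbatim]
and, assuming boards of broadly similar reliability, halving the board
count roughly doubles the hardware MTBF.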
@subheading{Utilisation}
The proposed method of linking the TelePAC switch into the network is
shown in Fig 2, though the various host and PAD connections will, as already
indicated, be connected in stages.  In particular we propose to provide
direct links to the following:-
@begin[itemize]
three GEC CPSEs (48 kb/s)

network filestore (FSTORE) (38.4 kb/s)

three EMAS hosts (2976s, 2988, Amdahl) (38.4 kb/s)

Gould PN9080 Unix host

two GEC 63/40 Unix hosts

VMS package service host

15 existing PADs
@end[itemize]
The effect of this will be to free ports on the GEC CPSEs and to provide
an effective load with which to check out the TelePAC switch.
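
Counting one port per connection listed above, the initial loading on the
30-port switch would be approximately:
@begin[verbatim]
   3 CPSE + 1 FSTORE + 3 EMAS + 1 Gould + 2 GEC 63/40 + 1 VMS + 15 PADs
                                    =  26 of 30 ports, leaving 4 spare
@end[verbatim]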
@subheading{Summary of Equipment Required}
The total equipment required is as follows (all costs include VAT).
@begin[verbatim]
   a) TelePAC T3639-101/30:
      TelePAC HPE-3, 30-port high speed network
      processor (rack mounting) comprising:

       1 x T3639-400 CPU Module
      15 x T3639-401 DMA Module
       1 x T3639-402 64K Battery-backed Memory Module
       1 x T3639-403 Display Module
       1 x T3639-413 2.0 Mb Memory Expansion Module
       4 x T3639-408 V35 Line Interface Module
      26 x T3639-405 V24 Line Interface Module              26,277

   b) T3639-409 Dual Floppy Disc Controller with
      two drives                                             1,006
                                                            -------
                                                    Total @T{#} 27,283
                                                            =======

      Annual maintenance cost  @T{#} 2,728
                                ======

                                                       B. A. C. Gilmore
                                                         February 1985.
@end[verbatim]