Sunday, February 19, 2012

Analyze Backup Server

I am trying to analyze/identify the bottleneck on our new backup
server. I was wondering if someone could recommend some performance
counters to watch.
Right now I am watching "% Write Time" I am averaging around
"500" for this measure while copying a 50GB database file. It is
taking almost twice as long to copy the file to a different server as
it does to copy it to a direct-attached array. The % disk write counter
is similar on both servers. Am I wrong to assume that the physical
disks are the bottleneck (given the high counter readings)?
Does anyone know some general guidelines on real-world throughput for
the following? What kinds of things should I be looking at and/or
asking our IT department?
Gigabit dedicated network
Perc4 (I read on Dell's site that it is 320MB/s, which translates to
1.1TB / hour)
Raid 5 using 6 x 300GB disks (10,000 RPM)
Other bottleneck candidate?
Server Configuration
Windows Server 2003
Dell PowerEdge 2850
Perc4 - Raid 5
PowerVault 200s
Gigabit network|||daveg.01@.gmail.com wrote:
> I am trying to analyze/identify the bottleneck on our new backup
> server. [snip]
>
5 of 6 disks at 60MB/sec is 300MB/sec for sustained transfer at
the disk level. You might verify that the disk transfer rate
is full Ultra320. A cabling or termination problem might
reduce the transfer rate.
The PERC4 listed here has a 64-bit, 66MHz interface, and as long
as it is fully utilizing the bus, the bus should not be a limit.
Sometimes a bus segment can be slowed by the presence of a slower
card, so you may want to check the card configuration and bus
structure of your server.
http://www.jjwei.com/shop/item.asp?itemid=78
But your Gigabit Ethernet only does 125MB/sec theoretical in one
direction, so a single transaction with your server will be limited
by the network. A local transfer on the server itself could go faster.
Plenty of little things to check.
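Those ceilings can be sanity-checked with a few lines of arithmetic. This is a rough sketch using only the figures quoted above (5 data disks at 60MB/s, Ultra320 at 320MB/s, a 64-bit/66MHz slot, Gigabit Ethernet at 125MB/s); they are theoretical maxima, not benchmark results:

```python
# Rough bottleneck ceilings for the setup described above.
# All figures are theoretical maxima quoted in this thread, not benchmarks.

FILE_GB = 50
MB_PER_GB = 1000  # decimal units, as disk and network vendors use

ceilings_mb_s = {
    "RAID 5 array (5 data disks x 60MB/s)": 5 * 60,   # 300 MB/s
    "Ultra320 SCSI bus": 320,
    "64-bit/66MHz PCI slot": 64 // 8 * 66,            # 528 MB/s
    "Gigabit Ethernet (one direction)": 125,
}

for name, rate in sorted(ceilings_mb_s.items(), key=lambda kv: kv[1]):
    seconds = FILE_GB * MB_PER_GB / rate
    print(f"{name}: {rate} MB/s -> 50GB in {seconds:.0f}s")

# The slowest ceiling wins: over the network, Gigabit Ethernet (125MB/s)
# is the bottleneck, so 50GB cannot move in less than ~400 seconds.
```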
Paul|||Assuming everything is configured properly, how long would I expect it
to take to transfer a 50GB file to the backup server?
If the perfmon counter "% disk write" goes above 100, should I assume
that the physical disks are the bottleneck?
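On the counter question: perfmon derives the % Disk Time family of counters from the average disk queue length, which is why readings above 100 are normal on a multi-spindle volume and do not by themselves mean the disks are saturated. A rough interpretation of the reading above (the even per-spindle split is a simplification, since RAID 5 does not spread writes perfectly evenly):

```python
# Rough interpretation of a PhysicalDisk "% Disk Write Time" reading.
# Perfmon derives this counter from the average write-queue length * 100,
# which is why it can exceed 100 on a multi-disk array.

pct_disk_write_time = 500   # the value observed in the post above
spindles = 6                # the 6-disk RAID 5 array described above

avg_write_queue = pct_disk_write_time / 100         # ~5 writes in flight
queue_per_spindle = avg_write_queue / spindles      # ~0.83 per disk (naive)

print(f"Average write queue: {avg_write_queue:.1f}")
print(f"Per spindle (naive even split): {queue_per_spindle:.2f}")

# Rule of thumb: a sustained queue well above ~2 per spindle suggests the
# disks are the bottleneck; here they look busy but not clearly swamped.
# Avg. Disk sec/Write and Disk Bytes/sec are more direct counters to watch.
```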
|||Dave wrote:
> Assuming everything is configured properly, how long would I expect it
> to take to transfer a 50GB file to the backup server?
> If the perfmon counter "%disk write" goes above 100 should I assume
> that the physical disks are the bottleneck?
If the transfer is over the network, the limit is 125MB/sec. At that
rate, transferring a 50GB file takes at least 400 seconds.
Paul
|||On Sun, 14 Jan 2007 06:54:21 -0500, Paul <nospam@.needed.com>
wrote:
>If the transfer is over the network, the limit is 125MB/sec. To transfer
>a 50GB file takes 400 seconds.
> Paul
>
I'd be surprised if a real transfer averages over 90MB/s,
especially with TCP/IP protocol overhead taken into account.
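That 90MB/s figure is plausible even on a clean link: Ethernet, IP, and TCP headers take a slice of every frame. A rough goodput estimate, assuming a standard 1500-byte MTU and the usual per-frame overhead values (real links vary, and anything causing retransmits or disk stalls pulls the number lower):

```python
# Rough TCP-over-Gigabit goodput estimate with a standard 1500-byte MTU.
# Overhead figures are the usual per-frame values; real links vary.

line_rate_mb_s = 125          # 1 Gb/s expressed in MB/s
mtu = 1500                    # IP packet size
ip_tcp_headers = 40           # 20B IP + 20B TCP (no options)
eth_overhead = 38             # 14B header + 4B FCS + 8B preamble + 12B gap

payload = mtu - ip_tcp_headers      # 1460 bytes of file data per packet
wire_bytes = mtu + eth_overhead     # 1538 bytes on the wire per packet

goodput = line_rate_mb_s * payload / wire_bytes
print(f"Theoretical TCP goodput: {goodput:.1f} MB/s")   # ~118.7 MB/s

seconds = 50_000 / goodput          # the 50GB file from the thread
print(f"50GB at that rate: {seconds / 60:.1f} minutes")

# Small receive windows, delayed ACKs, or disk stalls at either end push
# the real-world average down toward 90MB/s or less.
```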
