Subject: Re: [Networker] NSR 75SP3 : Stable for prod ?
From: "Bergmann, Chr. Carl" <chbe AT RISOE.DTU DOT DK>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Mon, 6 Sep 2010 14:40:25 +0200
Does anyone have a link to the patched savegrp binary for RedHat AS release 4, 32-bit?
After upgrading to 7.5.3 (build 533), groups with 50 clients can take up to 90 minutes
to start.

        
Chr. Carl Bergmann
System manager
IT Service Department
Risø DTU

Technical University of Denmark
Risø National Laboratory for Sustainable Energy
Frederiksborgvej 399, P.O. Box 49
Building 113
4000 Roskilde
Direct +45 4677 5550
chbe AT risoe.dtu DOT dk
www.risoe.dtu.dk


-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On 
Behalf Of Jóhannes Karl Karlsson
Sent: Monday, August 30, 2010 11:39 AM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: [Networker] NSR 75SP3 : Stable for prod ?

We had some problems with 7.5.3 to begin with when we installed build 514: groups 
of Oracle clients were not finishing properly (hanging).

EMC then released build 531 and, a few days later, build 533. We installed 
NetWorker 7.5.3.1 build 533 and our problems got even worse.

EMC then released a patched savegrp.exe for build 533. After installing that 
patched savegrp.exe binary we have not had any problems.

NetWorker 7.5.3.1 build 533 with the patched savegrp.exe binary seems to be stable 
and good.

Johannes



-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On 
Behalf Of Len Philpot
Sent: 17 August 2010 15:26
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: [Networker] NSR 75SP3 : Stable for prod ?

> STANLEY R. HORWITZ 
> 
> What ulimit settings are you using and how many clients are you backing up?
> 
> Here's what I have …
> 
> [root@puss nsr_scripts]# ulimit -a
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 1024
> max locked memory       (kbytes, -l) 32
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> stack size              (kbytes, -s) 10240
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 143360
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited

Yours looks like Solaris 10, but this is on 9 (SPARC):

# ulimit -a
core file size (blocks)     unlimited
data seg size (kbytes)      unlimited
file size (blocks)          unlimited
open files                  unlimited
pipe size (512 bytes)       10
stack size (kbytes)         8192
cpu time (seconds)          unlimited
max user processes          29995
virtual memory (kbytes)     unlimited

The two groups that were abending had 41 and 25 clients each (not huge), 
and we have a little over 100 clients total. However, the old ulimit 
settings (which I don't recall) were from the original Solaris 8 
installation back in 2003 (NetWorker 6.1). So they weren't exactly big.
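For anyone comparing limits across hosts like this, a quick sketch of checking the file-descriptor limits that nsrd and savegrp would inherit from the starting shell (the 4096 figure is only an illustrative value, not an EMC recommendation):

```shell
# Show the current soft and hard limits on open file descriptors;
# NetWorker daemons inherit the soft limit from the shell that starts them
ulimit -Sn
ulimit -Hn

# To raise the soft limit (up to the hard limit) before starting nsrd,
# you could run e.g.:
#   ulimit -n 4096
```

Lowering a soft limit is always allowed; raising it beyond the hard limit requires root (or an edit to /etc/security/limits.conf on Linux).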

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER

