The mksysb command creates a bootable image of all mounted file systems on the rootvg volume group. The root volume group image is in backup file format, starting with the data files and then any optional map files. The image file also contains information describing the image installed during the BOS installation process, including the names, sizes, maps and mount points of the logical volumes and file systems in rootvg. The command to back up the AIX operating system to a tape drive is:

mksysb -e /dev/rmt0

The -e flag excludes from the backup the files and directories listed in /etc/exclude.rootvg.
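A minimal tape-backup session might look like the following sketch (the tape device rmt0 and the excluded path are illustrative; adjust them for your system):

Confirm that a tape drive is available:
# lsdev -Cc tape
Optionally list files to skip in /etc/exclude.rootvg (one grep pattern per line):
# echo "^/tmp/scratch/" >> /etc/exclude.rootvg
Create the bootable backup, regenerating the /image.data file with -i and honouring the exclude list with -e:
# mksysb -e -i /dev/rmt0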



Reduced time to deployment

IBM HPC clustering offers significant price/performance advantages for many high-performance workloads by harnessing the advantages of low-cost servers plus innovative, easily available open source software.

Today, some businesses are building their own Linux and Microsoft clusters using commodity hardware, standard interconnects and networking technology, open source software, and in-house or third-party applications. Despite the apparent cost advantages offered by these systems, the expense and complexity of assembling, integrating, testing and managing these clusters from disparate, piece-part components often outweigh any benefits gained.

IBM has designed the IBM System Cluster 1350 to help address these challenges. Now clients can benefit from IBM’s extensive experience with HPC to help minimize this complexity and risk. Using advanced Intel® Xeon®, AMD Opteron™ and IBM PowerPC® processor-based server nodes, proven cluster management software and optional high-speed interconnects, the Cluster 1350 offers the best of IBM and third-party technology. As a result, clients can speed up installation of an HPC cluster, simplify its management, and reduce mean time to payback.

The Cluster 1350 is designed to be an ideal solution for a broad range of application environments, including industrial design and manufacturing, financial services, life sciences, government and education. These environments typically require excellent price/performance for handling high-performance computing (HPC) and business performance computing (BPC) workloads. It is also an excellent choice for applications that require horizontal scaling capabilities, such as Web serving and collaboration.
Common features
Hardware summary
Rack-optimized Intel Xeon dual-core and quad-core and AMD Opteron processor-based servers
Intel Xeon, AMD and PowerPC processor-based blades
Optional high capacity IBM System Storage™ DS3200, DS3400, DS4700, DS4800 and EXP3000 Storage Servers and IBM System Storage EXP 810 Storage Expansion
Industry-standard Gigabit Ethernet cluster interconnect
Optional high-performance Myrinet-2000 and Myricom 10g cluster interconnect
Optional Cisco, Voltaire, Force10 and PathScale InfiniBand cluster interconnects
ClearSpeed Floating Point Accelerator
Terminal server and KVM switch
Space-saving flat panel monitor and keyboard
Runs with RHEL 4 or SLES 10 Linux operating systems or Windows Compute Cluster Server
Robust cluster systems management and scalable parallel file system software
Hardware installed and integrated in 25U or 42U Enterprise racks
Scales up to 1,024 cluster nodes (larger systems and additional configurations available—contact your IBM representative or IBM Business Partner)
Optional Linux cluster installation and support services from IBM Global Services or an authorized partner or distributor
Clients must obtain the version of the Linux operating system specified by IBM from IBM, the Linux Distributor or an authorized reseller
x3650—dual core up to 3.0 GHz, quad core up to 2.66 GHz
x3550—dual core up to 3.0 GHz, quad core up to 2.66 GHz
x3455—dual core up to 2.8 GHz
x3655—dual core up to 2.6 GHz
x3755—dual core up to 2.8 GHz
HS21—dual core up to 3.0 GHz, quad core up to 2.66 GHz
HS21 XM—dual core up to 3.0 GHz, quad core up to 2.33 GHz
JS21—2.7/2.6 GHz*; 2.5/2.3 GHz*
LS21—dual core up to 2.6 GHz
LS41—dual core up to 2.6 GHz
QS20—multi-core 3.2 GHz



IBM System Cluster 1600 systems are composed of IBM POWER5™ and POWER5+™ symmetric multiprocessing (SMP) servers running AIX 5L™ or Linux®. Cluster 1600 is a highly scalable cluster solution for large-scale computational modeling and analysis, large databases and business intelligence applications, and cost-effective data center, server and workload consolidation. Cluster 1600 systems can be deployed on Ethernet networks, InfiniBand networks, or with the IBM High Performance Switch, and are typically managed with Cluster Systems Management (CSM) software, a comprehensive tool designed specifically to streamline initial deployment and ongoing management of cluster systems.
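As a flavour of what day-to-day CSM management looks like, here is a minimal sketch (it assumes a configured CSM management server; output will vary by cluster):

List the nodes defined to the cluster:
# lsnode
Run a command on all nodes at once with the distributed shell:
# dsh -a date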
Common features
· Highly scalable AIX 5L or Linux cluster solutions for large-scale computational modeling, large databases and cost-effective data center, server and workload consolidation
· Cluster Systems Management (CSM) software for comprehensive, flexible deployment and ongoing management
· Cluster interconnect options: industry-standard 1/10 Gb Ethernet (AIX 5L or Linux); IBM High Performance Switch (AIX 5L and CSM); SP Switch2 (AIX 5L and PSSP); 4x/12x InfiniBand (AIX 5L or SLES 9); or Myrinet (Linux)
· Operating system options: AIX 5L Version 5.2 or 5.3, SUSE Linux Enterprise Server 8 or 9, Red Hat Enterprise Linux 4
· Complete software suite for creating, tuning and running parallel applications: Engineering & Scientific Subroutine Library (ESSL), Parallel ESSL, Parallel Environment, XL Fortran, VisualAge C++
· High-performance, highly available, highly scalable cluster file system: General Parallel File System (GPFS)
· Job scheduling software to optimize resource utilization and throughput: LoadLeveler®
· High availability software for continuous access to data and applications: High Availability Cluster Multiprocessing (HACMP™)
Hardware summary
· Mix and match IBM POWER5 and POWER5+ servers:
· IBM System p5™ 595, 590, 575, 570, 560Q, 550Q, 550, 520Q, 520, 510Q, 510, 505Q and 505
· IBM eServer™ p5 595, 590, 575, 570, 550, 520, and 510
· Up to 128 servers or LPARs (AIX 5L or Linux operating system images) per cluster depending on hardware; higher scalability by special order



* Advanced IBM POWER6™ processor cores for enhanced performance and reliability
* Building block architecture delivers flexible scalability and modular growth
* Advanced virtualization features facilitate highly efficient systems utilization
* Enhanced RAS features enable improved application availability

The IBM POWER6 processor-based System p™ 570 mid-range server delivers outstanding price/performance, mainframe-inspired reliability and availability features, flexible capacity upgrades and innovative virtualization technologies. This powerful 19-inch rack-mount system, which can handle up to 16 POWER6 cores, can be used for database and application serving, as well as server consolidation. The modular p570 is designed to continue the tradition of its predecessor, the IBM POWER5+™ processor-based System p5™ 570 server, for resource optimization, secure and dependable performance and the flexibility to change with business needs. Clients have the ability to upgrade their current p5-570 servers and know that their investment in IBM Power Architecture™ technology has again been rewarded.

The p570 is the first server designed with POWER6 processors, resulting in performance and price/performance advantages while ushering in a new era in the virtualization and availability of UNIX® and Linux® data centers. POWER6 processors can run 64-bit applications, while concurrently supporting 32-bit applications to enhance flexibility. They feature simultaneous multithreading,1 allowing two application “threads” to be run at the same time, which can significantly reduce the time to complete tasks.

The p570 system is more than an evolution of technology wrapped into a familiar package; it is the result of “thinking outside the box.” IBM’s modular symmetric multiprocessor (SMP) architecture means that the system is constructed using 4-core building blocks. This design allows clients to start with what they need and grow by adding additional building blocks, all without disruption to the base system.2 Optional Capacity on Demand features allow the activation of dormant processor power for times as short as one minute. Clients may start small and grow with systems designed for continuous application availability.

Specifically, the System p 570 server provides:

Common features
Hardware summary
* 19-inch rack-mount packaging
* 2- to 16-core SMP design with building block architecture
* 64-bit 3.5, 4.2 or 4.7 GHz POWER6 processor cores
* Mainframe-inspired RAS features
* Dynamic LPAR support
* Advanced POWER Virtualization1 (option)
o IBM Micro-Partitioning™ (up to 160 micro-partitions)
o Shared processor pool
o Virtual I/O Server
o Partition Mobility2
* Up to 32 optional I/O drawers
* IBM HACMP™ software support for near continuous operation*
* Supported by AIX 5L (V5.2 or later) and Linux® distributions from Red Hat (RHEL 4 Update 5 or later) and SUSE Linux (SLES 10 SP1 or later) operating systems
* 4U 19-inch rack-mount packaging
* One to four building blocks
* Two, four, eight, 12 or 16 3.5 GHz, 4.2 GHz or 4.7 GHz 64-bit POWER6 processor cores
* L2 cache: 8 MB to 64 MB (2- to 16-core)
* L3 cache: 32 MB to 256 MB (2- to 16-core)
* 2 GB to 192 GB of 667 MHz buffered DDR2, or 16 GB to 384 GB of 533 MHz buffered DDR2, or 32 GB to 768 GB of 400 MHz buffered DDR2 memory3
* Four hot-plug, blind-swap PCI Express 8x and two hot-plug, blind-swap PCI-X DDR adapter slots per building block
* Six hot-swappable SAS disk bays per building block provide up to 7.2 TB of internal disk storage
* Optional I/O drawers may add up to an additional 188 PCI-X slots and up to 240 disk bays (72 TB additional)4
* One SAS disk controller per building block (internal)
* One integrated dual-port Gigabit Ethernet per building block standard; one quad-port Gigabit Ethernet per building block available as optional upgrade; one dual-port 10 Gigabit Ethernet per building block available as optional upgrade
* Two GX I/O expansion adapter slots
* One dual-port USB per building block
* Two HMC ports (maximum of two), two SPCN ports per building block
* One optional hot-plug media bay per building block
* Redundant service processor for multiple building block systems2



Mirror Write Consistency (MWC) ensures data consistency on logical volumes in case a system crash occurs during mirrored writes. The active method achieves this by logging when a write occurs: LVM makes an update to the MWC log that identifies what areas of the disk are being updated before performing the write of the data. Records of the last 62 distinct logical track groups (LTG) written to disk are kept in memory and also written to a separate checkpoint area on disk (the MWC log). This results in a performance degradation during random writes.

With AIX V5.1 and later, there are now two ways of handling MWC:
• Active, the existing method
• Passive, the new method
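The MWC mode is set per logical volume with the chlv command. A minimal sketch (the logical volume name datalv is illustrative; passive MWC additionally requires a big volume group):

Show the current Mirror Write Consistency setting:
# lslv datalv
Switch a mirrored logical volume to passive MWC:
# chlv -w p datalv
Turn MWC off entirely; mirrors must then be resynchronized manually after a crash (for example with syncvg) before the data can be trusted:
# chlv -w n datalv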



The objectives of the Web-based System Manager are:
• Simplify AIX administration through a single interface
• Enable AIX systems to be administered from almost any client platform, either with a browser that supports Java 1.3 or with client code downloaded from an AIX V5.3 system
• Enable AIX systems to be administered remotely
• Provide a system administration environment with a look and feel similar to the Windows NT/2000/XP, Linux and AIX CDE environments

The Web-based System Manager provides a comprehensive system management environment and covers most of the tasks in the SMIT user interface. The Web-based System Manager can only be run from a graphics terminal, so SMIT will still need to be used in the ASCII environment.

To download the Web-based System Manager client code from an AIX host, use the address http:///remote_client.html

Supported Microsoft Windows clients for AIX 5.3 are Windows 2000 Professional, Windows XP Professional, and Windows Server 2003. Supported Linux clients are PCs running Red Hat Enterprise Version 3, SLES 8, SLES 9, SuSE 8.0, SuSE 8.1, SuSE 8.2, or SuSE 9.0, using the KDE or GNOME desktops only.

The PC Web-based System Manager client installation needs a minimum of 300 MB of free disk space, 512 MB of memory (1 GB preferred) and a 1 GHz CPU.
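On the AIX side, a couple of commands cover the usual setup. A minimal sketch (standard AIX 5.3 paths, shown for illustration):

Enable the Web-based System Manager server so remote clients can connect:
# /usr/websm/bin/wsmserver -enable
Start the client locally on a graphics terminal:
# wsm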



/var/adm/sulog: switch user (su) log file (ASCII). Use cat, pg or more to view it and rm to clean it out.
/etc/security/failedlogin: failed logins from users. Use the who command to view the information. Use "cat /dev/null > /etc/security/failedlogin" to empty it.
/var/adm/wtmp: all login accounting activity. Use the who command to view it. Use "cat /dev/null > /var/adm/wtmp" to empty it.
/etc/utmp: who has logged in to the system. Use the who command to view it. Use "cat /dev/null > /etc/utmp" to empty it.
/var/spool/lpd/qdir/*: leftover queue requests.
/var/spool/qdaemon/*: temporary copies of spooled files.
/var/spool/*: spooling directory.
smit.log: SMIT log of session activity.
smit.script: SMIT log of the commands it has executed.
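For example, to review the failed logins and then reset that log (run as root):
# who /etc/security/failedlogin
# cat /dev/null > /etc/security/failedlogin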



What is an LVM hot spare?
A hot spare is a disk or group of disks used to replace a failing disk. LVM marks a physical volume missing due to write failures, and then starts the migration of data to the hot spare disk.

Minimum hot spare requirements
The following is a list of the minimal hot sparing requirements enforced by the operating system:
- Spares are allocated and used by volume group.
- Logical volumes must be mirrored.
- All logical partitions on hot spare disks must be unallocated.
- Hot spare disks must have at least equal capacity to the smallest disk already in the volume group. Good practice dictates having enough hot spares to cover your largest mirrored disk.

Hot spare policy
The chpv and the chvg commands are enhanced with a new -h argument. This allows you to designate disks as hot spares in a volume group and to specify a policy to be used in the case of failing disks. The following four values are valid for the hot spare policy argument (-h):
y (lower case): Automatically migrates partitions from one failing disk to one spare disk. From the pool of hot spare disks, the smallest one which is big enough to substitute for the failing disk will be used.
Y (upper case): Automatically migrates partitions from a failing disk, but might use the complete pool of hot spare disks.
n: No automatic migration will take place. This is the default value for a volume group.
r: Removes all disks from the pool of hot spare disks for this volume group.

Synchronization policy
There is a new -s argument for the chvg command that is used to specify synchronization characteristics. The following two values are valid for the synchronization argument (-s):
y: Automatically attempts to synchronize stale partitions.
n: Does not automatically attempt to synchronize stale partitions. This is the default value for a volume group.

Examples
The following command marks hdisk1 as a hot spare disk:
# chpv -hy hdisk1
The following command sets an automatic migration policy which uses the smallest hot spare that is large enough to replace the failing disk, and automatically tries to synchronize stale partitions:
# chvg -hy -sy testvg
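To verify the settings afterwards, the volume group listing reports the hot spare policy (a sketch, reusing the testvg name from above; field names may vary slightly by AIX level):
# lsvg testvg
Check the HOT SPARE field in the output.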

