Sunday, December 9, 2007

My Investment Strategy

1. Buy and hold only: long-term investing.
Keep buying quality stocks throughout the bear market, hold for 20 years, and only then look for an opportunity to sell.
Bear market definition: a drop of more than 50% from the high. Buy once every six months, 2K - 10K each time.
Quality stocks by sector:
a. Financials
1) 000001 深发展 2) 002142 宁波银行 3) 601009 南京银行 4) 601166 兴业银行 5) 601169 北京银行
6) 601318 中国平安 7) 601628 中国人寿 8) 601328 交通银行 9) 601998 中信银行

b. Real estate
1) 000002 万科 2) 000024 招商地产 3) 600113 浙江东日 4) 600193 创兴置业 5) 600246 万通地产
6) 600393 东华实业 7) 600463 空港股份 8) 600641 万业企业 9) 600675 中华企业
10) 600606 金丰投资

c. Energy, power, and resources
1) 000096 广聚能源 2) 000534 汕电力 3) 000939 凯迪电力 4) 600098 广州控股 5) 600864 岁宝热电
6) 600886 国投电力 7) 600969 彬电国际 8) 000060 中金南岭 9) 000612 焦作万方 10) 002082 栋梁新材
11) 002114 罗平锌电 12) 600255 鑫科材料 13) 600432 吉恩镍业 14) 600497 驰宏锌锗 15) 600558 大西洋

d. Pharmaceuticals and biotech
1) 000513 丽珠集团 2) 000623 吉林敖东 3) 000919 金陵药业 4) 000989 九芝堂 5) 002004 华邦制药
6) 600085 同仁堂 7) 600380 健康元 8) 600993 马应龙

e. Others
1) 600616 第一食品 2) 000039 中集集团


2. Trade ETFs: buy the index by feel.



3. Short-term speculation: the key is preserving capital - cash is king!
Buy signals:
RSI < 20; take profit and exit at 10%-20%. Note that the KDJ indicator's J < 0 signal sometimes appears before the RSI signal.
Inner volume far exceeding outer volume (by 20%) suggests the market maker is buying; follow in and exit at a 5%-10% profit.
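For reference, the RSI threshold above follows Wilder's standard relative strength index; the post does not state the look-back period, so the common 14-period default is assumed here:

```latex
% Wilder's RSI over an n-period look-back (n = 14 is the conventional default)
RS_n = \frac{\text{average gain over the last } n \text{ periods}}
            {\text{average loss over the last } n \text{ periods}},
\qquad
\mathrm{RSI}_n = 100 - \frac{100}{1 + RS_n}
```

RSI < 20 means losses have dominated gains heavily over the look-back window, i.e., a deeply oversold reading.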




Tuesday, December 4, 2007

Watching the Market Maker Through Order-Book Changes

The psychological war of posted orders

Posting a huge buy or sell order in the order book is purely a technique the market maker uses to steer the price in a chosen direction; once the goal is achieved, the order is withdrawn. This is one of the market maker's routine price-guiding tricks. When a huge order sits on the buy side, retail sellers feel psychological pressure: with such large buying, might the price go higher still? Conversely, when a huge order sits on the sell side, retail buyers hesitate. The battle of wits plays out in the order book; retail investors need only be bold but careful, stay calm, hold their conviction, and follow the market maker to the end.

Inner and outer volume reveal the truth

Inner volume is the volume executed at the bid price, i.e., seller-initiated volume.

Outer volume is the volume executed at the ask price, i.e., buyer-initiated volume. Whenever the market maker moves in or out, it is hard to hide from the inner/outer volume readout. Wash trades can confuse observers for a while, but the movement of the market maker's large positions inevitably shows up in the inner and outer volume.

Hidden orders give the game away

The posted bids and asks in the order book are often decoys planted by the market maker. A wall of posted sell orders is commonly called a cap; a wall of posted buy orders is called a cushion. The market maker's genuine, purposeful orders are usually executed immediately: such hidden orders never show in the posted book, but they cannot escape the trade tape. Studying the relationship between these hidden executions and the posted orders therefore reveals the market maker's true intent.

Order-book rules of thumb

(1) With a cap above, heavy hidden outer volume while the price holds is a precursor to a sharp rise.

(2) With a cushion below, heavy hidden inner volume is a sign the market maker is distributing.

(3) Outer volume exceeds inner volume but the price does not rise: beware of the market maker distributing.

(4) Inner volume exceeds outer volume with the price falling on rising volume, for the second day running: the last exit chance for the clear-eyed.

(5) Both inner and outer volume are small while the price edges up slightly: the market maker has locked up the float and is gently walking the price higher.

(6) Outer volume exceeds inner volume and the price keeps rising: expect further gains.

(7) Inner volume exceeds outer volume yet the price holds or even edges up: a market maker may be accumulating.

Replication Configuration Between MSSQL 2005 and Oracle



1. Oracle as publisher

1) Configuring an Oracle Publisher
Supported replication modes: snapshot replication and transactional replication.
Preparation steps before creating a publication from an Oracle database:

1.1 Create a replication administrative user within the Oracle database using the supplied script.
Connect to the Oracle database using an account with DBA privileges and execute the script. This script prompts for the user and password for the replication administrative user schema as well as the default tablespace in which to create the objects (the tablespace must already exist in the Oracle database).
It is recommended that the schema be used only for objects required by replication; do not create tables to be published in this schema.
If you create the user schema manually instead, you must grant the following permissions to the user, either directly or through a role:

CREATE PUBLIC SYNONYM and DROP PUBLIC SYNONYM
CREATE PROCEDURE
CREATE SEQUENCE
CREATE SESSION

and you must also grant the following permissions to the user directly (not through a role):
CREATE ANY TRIGGER. This is required only for transactional replication; snapshot replication does not use triggers.

CREATE TABLE
CREATE VIEW
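If you do create the schema by hand rather than running oracleadmin.sql, the permissions listed above translate into statements along these lines; the user name repluser, its password, and the tablespace REPLTBS are illustrative placeholders, not names taken from the script:

```sql
-- Run as a DBA. repluser / REPLTBS are hypothetical names.
CREATE USER repluser IDENTIFIED BY replpwd
  DEFAULT TABLESPACE REPLTBS
  QUOTA UNLIMITED ON REPLTBS;

-- These may be granted directly or through a role:
GRANT CREATE PUBLIC SYNONYM, DROP PUBLIC SYNONYM TO repluser;
GRANT CREATE PROCEDURE, CREATE SEQUENCE, CREATE SESSION TO repluser;

-- These must be granted directly, not through a role:
GRANT CREATE ANY TRIGGER TO repluser;  -- transactional replication only
GRANT CREATE TABLE TO repluser;
GRANT CREATE VIEW TO repluser;
```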


1.2 For the tables that you will publish, grant SELECT permission directly on each of them (not through a role) to the Oracle administrative user you created.

For example, log in to Oracle as user "SCOTT":

c:>sqlplus scott/tiger@ora101
SQL> grant select on DEPT to repluser;
Grant succeeded.
SQL> grant select on EMP to repluser;
Grant succeeded.


1.3 Install the Oracle client software and OLE DB provider on the Microsoft SQL Server Distributor, and then restart the server.
You can use Oracle Universal Installer and Network Configuration Assistant to configure Oracle database network connectivity.
The account under which the SQL Server service on the Distributor runs must be granted read and execute permissions for the directory (and all subdirectories) in which the Oracle client networking software is installed.
To test connectivity, run "sqlplus <user>/<password>@<tns_service_name>" at a command prompt; if the connection works, you will see the SQL prompt.


1.4 Configure the Oracle database as a Publisher at the SQL Server Distributor.
Oracle Publishers always use a remote Distributor; you must configure an instance of SQL Server to act as a Distributor for your Oracle Publisher (an Oracle Publisher can only use one Distributor, but a single Distributor can service more than one Oracle Publisher). After a Distributor is configured, identify the Oracle database instance as a Publisher at the SQL Server Distributor through SQL Server Management Studio, Transact-SQL, or Replication Management Objects (RMO).
When you identify the Oracle database as a Publisher, you must choose an Oracle publishing option: Complete or Oracle Gateway. After a Publisher is identified, this option cannot be changed without dropping and reconfiguring the Publisher. The Complete option is designed to provide snapshot and transactional publications with the complete set of supported features for Oracle publishing. The Oracle Gateway option provides specific design optimizations to improve performance for cases where replication serves as a gateway between systems.
After the Oracle Publisher is identified at the SQL Server Distributor, replication creates a linked server with the same name as the TNS service name of the Oracle database. This linked server can be used only by replication. If you need to connect to the Oracle Publisher over a linked server connection, create another TNS service name, and then use this name when calling sp_addlinkedserver (Transact-SQL).


a. Script to grant Oracle permissions
This script is also available in the following location after installation: <drive>:\Program Files\Microsoft SQL Server\<instance_directory>\MSSQL\Install\oracleadmin.sql.

-- PL/SQL script to create a database user with the required
-- permissions to administer SQL Server publishing for an Oracle
-- database.
--
-- &&ReplLogin == Replication user login
-- &&ReplPassword == Replication user password
-- &&DefaultTablespace == Tablespace that will serve as the default
-- tablespace for the replication user.


b. Managing Oracle tablespace

To specify a tablespace for an article logging table, you can specify a tablespace in the Article Properties dialog box or by using sp_changearticle (Transact-SQL).

http://download.microsoft.com/download/4/7/a/47a548b9-249e-484c-abd7-29f31282b04d/Repl_Quickstart_for_Oracle.doc


2) Design considerations and limitations for Oracle publisher
2.1 The Oracle Gateway option provides improved performance over the Oracle Complete option; however, this option cannot be used to publish the same table in multiple transactional publications. A table can appear in at most one transactional publication and any number of snapshot publications. If you need to publish the same table in multiple transactional publications, choose the Oracle Complete option.

2.2 Replication supports publishing tables, indexes, and materialized views. Other objects are not replicated.

2.3 There are some small differences between the storage and processing of data in Oracle and SQL Server databases that affect replication.
a. Oracle has different maximum size limits for some objects. Any objects created in the Oracle publication database should adhere to the maximum size limits for the corresponding objects in SQL Server.
b. By default Oracle object names are created in upper case. Ensure that you supply the names of Oracle objects in upper case when publishing them through a SQL Server Distributor if they are upper case on the Oracle database.
c. Oracle has a slightly different SQL dialect from SQL Server; row filters should be written in Oracle-compliant syntax.
d. Oracle triggers fire when rows containing LOBs are inserted or deleted; however updates to LOB columns do not fire triggers. An update to a LOB column will be replicated immediately only if a non-LOB column of the same row is also updated in the same Oracle transaction.
e. For both snapshot and transactional replication, columns contained in unique indexes and constraints (including primary key constraints) must adhere to certain restrictions:

I. The maximum number of columns allowed in an index on SQL Server is 16.
II. All columns included in unique constraints must have supported data types.
III. All columns included in unique constraints must be published (they cannot be filtered).
IV. Columns involved in unique constraints or indexes should not be null.

f. Primary key to foreign key relationships in the Oracle database are not replicated to Subscribers.


2.4 There are a number of differences in how transactional replication features are supported when using an Oracle Publisher.
a. Subscribers to Oracle publications cannot use immediate updating or queued updating subscriptions, or be nodes in a peer-to-peer or bidirectional topology.
b. Subscribers to Oracle publications cannot be automatically initialized from a backup.
c. SQL Server supports two types of validation: binary and rowcount. Oracle Publishers support only rowcount validation.
d. SQL Server offers two snapshot formats: native bcp-mode and character-mode. Oracle Publishers support only character-mode snapshots.
e. Schema changes to published Oracle tables are not supported. To make schema changes, first drop the publication, make the changes, and then re-create the publication and any subscriptions.


3) Administrative considerations for Oracle publisher
3.1 Importing and loading data
Triggers are used in change tracking for transactional publications on Oracle. Changes to published tables can be replicated to Subscribers only if the replication triggers fire when an update, insert, or delete occurs. The Oracle utilities Oracle Import and SQL*Loader both have options that affect whether triggers will fire when rows are inserted into replicated tables with these utilities.
When using Oracle Import, if the "ignore" option is set to 'n', the table is dropped and re-created during the import; this removes the replication triggers and disables replication.
With SQL*Loader, if the "direct" option is set to 'false', rows are inserted using conventional INSERT statements, which fire the replication triggers. If direct is set to 'true', the load is optimized and the triggers do not fire, so the loaded rows are not replicated.

3.2 Making changes to published objects
The following action requires you to stop all activity on the published tables: Moving a published table.
The following actions require you to drop the publication, perform the operation, and then recreate the publication:

a. Truncating a published table.
b. Renaming a published table.
c. Adding a column to a published table.
d. Dropping or modifying a column that is published for replication.
e. Performing non-logged operations.

3.3 You must drop and reconfigure the Publisher if you drop or modify any Publisher level tracking tables, triggers, sequences, or stored procedures.


4) Performance tuning for Oracle publisher
4.1 As noted in 2.1, the Oracle Gateway option provides improved performance over the Oracle Complete option, at the cost of allowing each table in at most one transactional publication.

4.2 Changes to published Oracle tables are processed in groups called transaction sets. To ensure transactional consistency, each transaction set is committed as a single transaction at the distribution database. If the transaction set becomes too large, it cannot be processed efficiently as a single transaction. By default, transaction sets are created only by the Log Reader Agent. Transaction sets can be created with the Xactset job (an Oracle database job installed by replication), which uses the same mechanism that the Log Reader Agent does to create sets. To prevent the transaction set from becoming too large, ensure that transaction sets are created at regular intervals, even if the Log Reader Agent does not run or cannot connect to the Oracle Publisher.


5) Data type mapping for Oracle publisher


6) Backup and Restore for Oracle publisher
6.1 Ensure the Log Reader Agent does not run and that other database activity on the published tables does not occur while the Publisher is being backed up.

6.2 Back up the Publisher and the Distributor at the same time.

6.3 If the Publisher or Distributor must be restored, reinitialize all subscriptions.

6.4 To restore a Subscriber from a backup (without having to reinitialize subscriptions), the transactions delivered to the distribution database after the last subscription database backup was completed must still be available.

6.5 If the Publisher or Distributor becomes out of sync as the result of a database restore, the replication agents log error messages. At this point, you must drop and recreate all relevant publications and subscriptions.

6.6 If the Publisher must be dropped and reconfigured, drop the MSSQLSERVERDISTRIBUTOR public synonym and the configured Oracle replication user with the CASCADE option to remove all replication objects from the Oracle Publisher.
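Step 6.6 corresponds to statements like the following, executed on the Oracle Publisher as a DBA; repluser stands in for whichever replication administrative user you configured:

```sql
-- Remove the replication synonym, then the replication user and,
-- via CASCADE, every replication object in its schema.
DROP PUBLIC SYNONYM MSSQLSERVERDISTRIBUTOR;
DROP USER repluser CASCADE;
```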


7) Objects created on Oracle publisher


2. Oracle as subscriber
1) General Considerations for Non-SQL Server Subscribers
1.1 Replication supports publishing tables and indexed views as tables to non-SQL Server Subscribers.
1.2 If a publication will have SQL Server Subscribers and non-SQL Server Subscribers, the publication must be enabled for non-SQL Server Subscribers before any subscriptions to SQL Server Subscribers are created.
1.3 The account under which the Distribution Agent runs must have read access to the install directory of the OLE DB provider.
1.4 By default, scripts generated by the Snapshot Agent for non-SQL Server Subscribers use non-quoted identifiers in the CREATE TABLE syntax.
1.5 If the SQL Server Distributor is running on a 64 bit platform, you must use the 64 bit version of the appropriate OLE DB provider.
1.6 Replication moves data in Unicode format regardless of the collation/code pages used on the Publisher and Subscriber. It is recommended that you choose a compatible collation/code page when replicating between Publishers and Subscribers.
1.7 If an article is added to or deleted from a publication, subscriptions to non-SQL Server Subscribers must be reinitialized.
1.8 The only constraints supported for all non-SQL Server Subscribers are NULL and NOT NULL. Primary key constraints are replicated as unique indexes.
1.9 Published schema and data must conform to the requirements of the database at the Subscriber.
1.10 Tables replicated to non-SQL Server Subscribers will adopt the table naming conventions of the database at the Subscriber.
1.11 SQL Server offers two types of subscriptions: push and pull. Non-SQL Server Subscribers must use push subscriptions, in which the Distribution Agent runs at the SQL Server Distributor.
1.12 SQL Server offers two snapshot formats: native bcp-mode and character-mode. Non-SQL Server Subscribers require character mode snapshots.
1.13 Non-SQL Server Subscribers cannot use immediate updating or queued updating subscriptions, or be nodes in a peer-to-peer topology.
1.14 Non-SQL Server Subscribers cannot be automatically initialized from a backup.


2) Configuring an Oracle Subscriber
2.1 Install and configure Oracle client networking software and the Oracle OLE DB provider on the SQL Server Distributor.
2.2 Create a TNS name for the Subscriber.
2.3 Create a snapshot or transactional publication, enable it for non-SQL Server Subscribers, and then create a push subscription for the Subscriber.
2.4 The account under which the SQL Server service on the Distributor runs must be granted read and execute permissions for the directory (and all subdirectories) where the Oracle client networking software is installed.


3) Considerations for Oracle Subscribers
3.1 Oracle treats both empty strings and NULL values as NULL. This is important if you define a SQL Server column as NOT NULL, and are replicating the column to an Oracle Subscriber. To avoid failures when applying changes to the Oracle Subscriber, you must do one of the following:
a. Ensure that empty strings are not inserted into the published table as column values.
b. Use the -SkipErrors parameter for the Distribution Agent if it is acceptable to be notified of failures in the Distribution Agent history log and to continue processing.
c. Modify the generated create table script, removing the NOT NULL attribute from any character columns that may have associated empty strings, and supply the modified script as a custom create script for the article using the @creation_script parameter of sp_addarticle (Transact-SQL).
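Option (c) might look like the following Transact-SQL sketch on the Publisher; the publication name, article name, and script path are hypothetical:

```sql
-- Supply a hand-edited create script (NOT NULL removed from the
-- affected character columns) in place of the generated one.
EXEC sp_addarticle
    @publication      = N'OraSubPub',                    -- hypothetical
    @article          = N'Customers',
    @source_owner     = N'dbo',
    @source_object    = N'Customers',
    @creation_script  = N'C:\repl\customers_create.sql', -- modified script
    @pre_creation_cmd = N'drop';
```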


VMs vs. Multiple SQL Server Instances


1. Performance
A VM cannot match the performance of a physical machine; most estimates put VM overhead at 10%-15%.

2. Software Patching
VMs add OS and software patching overhead and administrative effort.

3. Backup/Recovery & Replication
VMs have the advantage in backup/recovery and in replication to another machine.

4. Licensing
Multiple SQL Server instances don't require any additional SQL Server licenses. For virtualization, each SQL Server system running in a guest VM must be licensed separately, unless you're running SQL Server 2005 Enterprise Edition, which provides unlimited VM usage.

How to Read Market Trends from the MACD Indicator

MACD is an analysis tool familiar to most investors, yet in practice many find its accuracy, timeliness, and practicality confusing. Applying textbook MACD techniques to actual price action often yields conclusions that diverge widely from, or even contradict, the real trend. The main reason is that most books on stock-market technical analysis treat MACD only superficially, presenting its general principles and methods while rarely touching on the indicator's finer points and specialized techniques. Building on the standard interpretation methods, this article elaborates on MACD's more specialized signals and uses.


The standard MACD readings revolve around the fast and slow lines, the red and green histogram bars, and the patterns they form. The usual analysis covers four broad areas: the values and positions of DIF and MACD; crossovers between DIF and MACD; expansion and contraction of the histogram bars; and the shapes of the MACD chart patterns.

Values and positions of DIF and MACD. 1. When DIF and MACD are both above 0 (above the zero line on the chart) and moving up, the market is generally in a bull phase: buy or hold. 2. When both are below 0 (below the zero line) and moving down, the market is generally in a bear phase: sell or stand aside. 3. When both are above 0 but moving down, the rally is generally ebbing and the stock is set to fall: sell or stand aside. 4. When both are below 0 but moving up, a rally is generally about to start and the stock is set to rise: buy or hold for the rise.

DIF/MACD crossovers. 1. When DIF and MACD are both above the zero line and DIF crosses above MACD, the market is strong and the price should rise again: add to positions or hold for the rise - one form of the MACD "golden cross". 2. When both are below the zero line and DIF crosses above MACD, the trend is about to turn strong and the decline is exhausted: start buying or hold - the other form of the "golden cross". 3. When both are above the zero line and DIF crosses below MACD, the trend is about to turn from strong to weak and the price is likely to fall sharply: sell most holdings and do not buy - one form of the MACD "death cross". 4. When both are below the zero line and DIF crosses below MACD, the market is about to re-enter an extremely weak phase and the price will fall further: sell or stand aside - the other form of the "death cross".
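For reference, the DIF and MACD (DEA) lines discussed above follow the construction used by most Chinese charting software; the parameters 12/26/9 are the conventional defaults assumed here, as the article itself does not give the formulas:

```latex
% C_t = closing price; \mathrm{EMA}_n = n-period exponential moving average
\mathrm{DIF}_t = \mathrm{EMA}_{12}(C)_t - \mathrm{EMA}_{26}(C)_t, \qquad
\mathrm{DEA}_t = \mathrm{EMA}_{9}(\mathrm{DIF})_t, \qquad
\mathrm{BAR}_t = 2\,(\mathrm{DIF}_t - \mathrm{DEA}_t)
```

A "golden cross" is DIF crossing above DEA; a "death cross" is DIF crossing below it. The red/green histogram bars on the chart are the BAR values.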


Tuesday, October 23, 2007

Using LogMiner to analyze transaction history


1. LogMiner Configuration
There are four basic objects in a LogMiner configuration that you should be familiar with: the source database, the mining database, the LogMiner dictionary, and the redo log files containing the data of interest.

* The source database is the database that produces all the redo log files that you want LogMiner to analyze.
* The mining database is the database that LogMiner uses when it performs the analysis.
* The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request.
* The redo log files contain the changes made to the database or database dictionary.

2. Steps in a Typical LogMiner Session
1) Enable Supplemental Logging
Database level:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; -- enable minimal database-level supplemental logging
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; -- ALL system-generated unconditional supplemental log group
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS; -- PRIMARY KEY system-generated unconditional supplemental log group
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS; -- UNIQUE system-generated conditional supplemental log group
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS; -- FOREIGN KEY system-generated conditional supplemental log group

To disable all database supplemental logging, you must first disable any identification key logging that has been enabled, then disable minimal supplemental logging. The following example shows the correct order:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

Table level:
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP emp_parttime (EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID) ALWAYS; -- User-defined unconditional log groups
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID); -- User-defined conditional supplemental log groups

2) Extract a LogMiner Dictionary (unless you plan to use the online catalog)
* Specify use of the online catalog by using the DICT_FROM_ONLINE_CATALOG option when you start LogMiner.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

* Extract database dictionary information to the redo log files.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

* Extract database dictionary information to a flat file.
"UTL_FILE_DIR = /oracle/database" must be put in initial parameter file first to enable directory access.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/database/', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/database/');

3) Specify Redo Log Files for Analysis
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME=>'/oracle/logs/log2.f');
SQL> EXECUTE DBMS_LOGMNR.REMOVE_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f');

When you use the online catalog together with the DBMS_LOGMNR.CONTINUOUS_MINE option in START_LOGMNR, you do not need to specify redo log files manually; LogMiner finds them automatically.


4) Start LogMiner
The OPTIONS parameter to DBMS_LOGMNR.START_LOGMNR:
* DICT_FROM_ONLINE_CATALOG
* DICT_FROM_REDO_LOGS
* CONTINUOUS_MINE
* COMMITTED_DATA_ONLY
* SKIP_CORRUPTION
* NO_SQL_DELIMITER
* PRINT_PRETTY_SQL
* NO_ROWID_IN_STMT
* DDL_DICT_TRACKING

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
STARTTIME => '01-Jan-2003 08:30:00', -
ENDTIME => '01-Jan-2003 08:45:00', -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.CONTINUOUS_MINE);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.SKIP_CORRUPTION);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTSCN => 621047, ENDSCN => 625695, OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME =>'/oracle/database/dictionary.ora');
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);

5) Query V$LOGMNR_CONTENTS
* LogMiner populates the view only in response to a query against it. You must successfully start LogMiner before you can query V$LOGMNR_CONTENTS.
* When a SQL select operation is executed against the V$LOGMNR_CONTENTS view, the redo log files are read sequentially.
* Every time you query V$LOGMNR_CONTENTS, LogMiner analyzes the redo log files for the data you request.
* The amount of memory consumed by the query is not dependent on the number of rows that must be returned to satisfy a query.
* The time it takes to return the requested data is dependent on the amount and type of redo log data that must be mined to find that data.

SQL> SELECT OPERATION, SQL_REDO, SQL_UNDO
SQL> FROM V$LOGMNR_CONTENTS
SQL> WHERE SEG_OWNER = 'OE' AND SEG_NAME = 'ORDERS' AND
SQL> OPERATION = 'DELETE' AND USERNAME = 'RON';

SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
SQL> WHERE SEG_NAME = 'EMPLOYEES' AND
SQL> SEG_OWNER = 'HR' AND
SQL> OPERATION = 'UPDATE' AND
SQL> DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'HR.EMPLOYEES.SALARY') >
SQL> 2*DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'HR.EMPLOYEES.SALARY');


6) End the LogMiner Session
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;


3. Accessing LogMiner Operational Information in Views
* V$LOGMNR_DICTIONARY
Shows information about a LogMiner dictionary file that was created using the STORE_IN_FLAT_FILE option to DBMS_LOGMNR.START_LOGMNR.
* V$LOGMNR_LOGS
Shows information about specified redo log files.
* V$LOGMNR_PARAMETERS
Shows information about optional LogMiner parameters, including starting and ending system change numbers (SCNs) and starting and ending times.
* V$DATABASE, DBA_LOG_GROUPS, ALL_LOG_GROUPS, USER_LOG_GROUPS, DBA_LOG_GROUP_COLUMNS, ALL_LOG_GROUP_COLUMNS, USER_LOG_GROUP_COLUMNS
Shows information about the current settings for supplemental logging.


4. Examples
* Examples of Mining by Explicitly Specifying the Redo Log Files of Interest
* Example 1: Finding All Modifications in the Last Archived Redo Log File
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 2: Grouping DML Statements into Committed Transactions
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 3: Formatting the Reconstructed SQL
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO FROM V$LOGMNR_CONTENTS;
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_UNDO FROM V$LOGMNR_CONTENTS;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 4: Using the LogMiner Dictionary in the Redo Log Files
SQL> SELECT NAME, SEQUENCE# FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

Find a redo log file that contains the end of the dictionary extract.
SQL> SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_END = 'YES' and SEQUENCE# <= 210);

Find the redo log file that contains the start of the data dictionary extract that matches the end of the dictionary found in the previous step.
SQL> SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES' and SEQUENCE# <= 208);

Specify the list of the redo log files of interest.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_210_482701534.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_208_482701534.dbf');
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_207_482701534.dbf');
Query the V$LOGMNR_LOGS view to display the list of redo log files to be analyzed, including their timestamps.
SQL> SELECT FILENAME AS name, LOW_TIME, HIGH_TIME FROM V$LOGMNR_LOGS;

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
SQL> SELECT USERNAME AS usr, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM') AND TIMESTAMP > '10-jan-2003 15:59:53';
SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS WHERE XIDUSN = 1 and XIDSLT = 2 and XIDSQN = 1594;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 5: Tracking DDL Statements in the Internal Dictionary
SQL> SELECT NAME, SEQUENCE# FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
Find the redo log files containing the end and the start of the dictionary extract.
SQL> SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_END = 'YES' and SEQUENCE# <= 210);
SQL> SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES' and SEQUENCE# <= 208);

Make sure you have a complete list of redo log files.
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE SEQUENCE# >= 207 AND SEQUENCE# <= 210 ORDER BY SEQUENCE# ASC;
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_210_482701534.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_209_482701534.dbf');
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_208_482701534.dbf');
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_207_482701534.dbf');
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.DDL_DICT_TRACKING + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
SQL> SELECT USERNAME AS usr,(XIDUSN || '.' || XIDSLT || '.' || XIDSQN) as XID, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM') AND TIMESTAMP > '10-jan-2003 15:59:53';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 6: Filtering Output by Time Range
Create a list of redo log files to mine.
--
-- my_add_logfiles
-- Add all archived logs generated after a specified start_time.
--
CREATE OR REPLACE PROCEDURE my_add_logfiles (in_start_time IN DATE) AS
CURSOR c_log IS
SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME >= in_start_time;

-- NEW starts a fresh file list; subsequent files are ADDed.
my_option pls_integer := DBMS_LOGMNR.NEW;

BEGIN
FOR c_log_rec IN c_log
LOOP
DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => c_log_rec.name,
OPTIONS => my_option);
my_option := DBMS_LOGMNR.ADDFILE;
DBMS_OUTPUT.PUT_LINE('Added logfile ' || c_log_rec.name);
END LOOP;
END;
/

SQL> EXECUTE my_add_logfiles(in_start_time => '13-jan-2003 14:00:00');

Query the V$LOGMNR_LOGS to see the list of redo log files.
SQL> SELECT FILENAME name, LOW_TIME start_time, FILESIZE bytes FROM V$LOGMNR_LOGS;
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => '13-jan-2003 15:00:00', ENDTIME => '13-jan-2003 16:00:00', OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
SQL> SELECT TIMESTAMP, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'OE';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();


* Examples of Mining Without Specifying the List of Redo Log Files Explicitly
* Example 1: Mining Redo Log Files in a Given Time Range
SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES');
SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS WHERE LOW_TIME > '10-jan-2003 12:01:34';
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => '10-jan-2003 12:01:34', ENDTIME => SYSDATE, OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL + DBMS_LOGMNR.CONTINUOUS_MINE);
SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS;
SQL> SELECT USERNAME AS usr, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM')) AND TIMESTAMP > '10-jan-2003 15:59:53';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 2: Mining the Redo Log Files in a Given SCN Range
SQL> SELECT CHECKPOINT_CHANGE#, CURRENT_SCN FROM V$DATABASE;
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTSCN => 56453576, ENDSCN => 56454208, OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL + DBMS_LOGMNR.CONTINUOUS_MINE);
SQL> SELECT FILENAME name, LOW_SCN, NEXT_SCN FROM V$LOGMNR_LOGS;
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG);
SQL> SELECT SCN, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) as XID, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER NOT IN ('SYS', 'SYSTEM');
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 3: Using Continuous Mining to Include Future Values in a Query
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => SYSDATE, ENDTIME => SYSDATE + 5/24, OPTIONS => DBMS_LOGMNR.CONTINUOUS_MINE + DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SET ARRAYSIZE 1;
SQL> SELECT USERNAME AS usr, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'HR' AND TABLE_NAME = 'EMPLOYEES';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();


* Scenario 1: Using LogMiner to Track Changes Made by a Specific User
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'log1orc1.ora', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'log2orc1.ora', OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => 'orcldict.ora', STARTTIME => TO_DATE('01-Jan-1998 08:30:00','DD-MON-YYYY HH:MI:SS'), ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
SQL> SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE USERNAME = 'joedevo' AND SEG_NAME = 'salary';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Scenario 2: Using LogMiner to Calculate Table Access Statistics
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => TO_DATE('07-Jan-2003 08:30:00','DD-MON-YYYY HH:MI:SS'), ENDTIME => TO_DATE('21-Jan-2003 08:45:00','DD-MON-YYYY HH:MI:SS'), DICTFILENAME => '/usr/local/dict.ora');
SQL> SELECT SEG_OWNER, SEG_NAME, COUNT(*) AS Hits FROM V$LOGMNR_CONTENTS WHERE SEG_NAME NOT LIKE '%$' GROUP BY SEG_OWNER, SEG_NAME ORDER BY Hits DESC;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

Sunday, June 10, 2007

Steps to set up oracle database RAC


  1. Verify hardware requirement
    • RAM - min 512MB
    • HD - 4GB
    • Swap - 1GB
    • Temp - 400MB
    • min 2 NIC
    • TCP/IP for public NIC, UDP for interconnect NICs
  2. Verify system requirement
    • User groups: oinstall, dba, nobody
    • User oracle's main group - oinstall, second group - dba
    • Environment variables: ORACLE_BASE, TMP, TMPDIR
    • Customized kernel parameters and open file limits
    • Storage - OCFS, Raw device or ASM
  3. Install CRS (Cluster Ready Service)
    • Private interconnect interfaces
    • Oracle cluster registry
    • Voting disk
    • Run root.sh for all nodes
  4. Install RAC database instance
    • Install Oracle database software
    • Use VIPCA to configure virtual IPs
  5. Create RAC database
    • Perform preinstall database tasks - CRS running (evmd.bin, ocssd.bin, crsd.bin), Group Services running, environment variables set
    • Create a cluster database: the additional step is selecting the nodes on which to create instances
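The preinstall check in step 5 can be sketched as a small script. This is a hypothetical illustration: it looks for the usual CRS daemon names (evmd.bin, ocssd.bin, crsd.bin), and the ps output is simulated so the logic can be run anywhere; on a real node you would feed it `ps -e -o comm=`.

```shell
#!/bin/sh
# Sketch: verify that the CRS daemons are present before creating the
# RAC database. The ps output below is simulated for illustration.
ps_output="evmd.bin
ocssd.bin
crsd.bin"

missing=0
for d in evmd.bin ocssd.bin crsd.bin; do
    if ! printf '%s\n' "$ps_output" | grep -q "^$d$"; then
        echo "CRS daemon $d is not running"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "all CRS daemons running"
```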





Saturday, June 9, 2007

Linux User Guide (2)

Managing users:

/etc/passwd stores all user account information, its format is:

username:encrypted password:UID:GID:fullname:home directory:login shell

/etc/shadow stores shadow password.
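The colon-separated format can be pulled apart with awk. A minimal sketch, using a made-up record (alice is not a real account):

```shell
# Split one /etc/passwd-style record into its fields (format described above).
line='alice:x:1001:100:Alice Example:/home/alice:/bin/bash'
printf '%s\n' "$line" | awk -F: '{ print "user=" $1, "uid=" $3, "gid=" $4, "shell=" $7 }'
```

This prints `user=alice uid=1001 gid=100 shell=/bin/bash`.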

1. Adding users

· Add an entry to /etc/passwd file;

· Create home directory;

· Set permission to let user own the directory;

· Create shell initialization file in the home directory;

Configure other system-wide files.
The $adduser command reads /etc/adduser.conf as its default configuration.

2. Deleting users
$userdel
You can temporarily disable an account by adding a * to its password field in /etc/passwd.

3. Setting user attributes
$passwd username : change a user's password; only root can change another user's password.
$chfn : change a user's full name.
$chsh : change a user's login shell.

4. Groups
/etc/group stores group and its members information, its format is:
group name:password:GID:other members
In /etc/passwd, each user is given a default GID; however, a user may belong to more than one group and can be added to additional groups through entries in /etc/group.
$groups : lists all the groups you belong to
$groupadd : adds a group to the system
To remove a group from the system, delete its entry in /etc/group.
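Membership in /etc/group can be checked the same way the passwd format is parsed. A sketch with a hypothetical group record:

```shell
# Check whether a user appears in the member list of one /etc/group-style
# record (fourth field, comma-separated). The record is made up.
line='staff:x:50:alice,bob'
printf '%s\n' "$line" | awk -F: -v u=bob '{
    n = split($4, m, ",")
    for (i = 1; i <= n; i++) if (m[i] == u) print u " is a member of " $1
}'
```

This prints `bob is a member of staff`.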







Tuesday, June 5, 2007

Catalog of design patterns

  • Scope in class level
  1. Creational: Factory Method (121)
  2. Structural: Adapter (157)
  3. Behavioral: Interpreter (274), Template Method (360)
  • Scope in object level
  1. Creational: Abstract Factory (99), Builder (110), Prototype (133), Singleton (144)
  2. Structural: Adapter (157), Bridge (171), Composite (183), Decorator (196), Facade (208), Flyweight (218), Proxy (233)
  3. Behavioral: Chain of Responsibility (251), Command (263), Iterator (289), Mediator (305), Memento (316), Observer (326), State (338), Strategy (349), Visitor (366)







Monday, June 4, 2007

Configure MSSQL to use more than 2GB RAM

  1. Add /pae in boot.ini
  2. Configure these in MS SQL 2000 (recommended):
  • awe enabled 1
  • max server memory (MB) 30000
  • min server memory (MB) 30000
  • set working set size 1
  • service account has "lock pages in memory" rights
  • memory settings are not dynamic when AWE is enabled; stopping and restarting SQL Server is required for new settings to take effect.
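As a sketch, the settings above map to sp_configure calls like the following (note the option is spelled 'awe enabled' in sp_configure; verify names and values against your server before use):

```
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1
EXEC sp_configure 'max server memory (MB)', 30000
EXEC sp_configure 'min server memory (MB)', 30000
EXEC sp_configure 'set working set size', 1
RECONFIGURE
```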







Sunday, June 3, 2007

Dollar Cost Averaging

The idea is simple: spend a fixed dollar amount at regular intervals (e.g., monthly) on a particular investment or portfolio/part of a portfolio, regardless of the share price. Since the market generally goes up, this is widely considered a safe long-term investing strategy. While dollar cost averaging is indeed relatively safe over the long term, it is not the safest, the most reliable, or the most effective strategy in general.

Each strategy wins at least some of the time, but after a few runs you'll see that DCA is the statistical "dog", losing about two times out of three.


Of course, dollar cost averaging will win if your start date falls right before a dramatic crash (like October 1987) or at the start of an overall 12 month slump (like most of 2000). But unless you can predict these downturns ahead of time, you have no scientific reason to believe that dollar cost averaging will give you an advantage.

So timing does matter: start at a big jump, continue until you have a 30% profit, then cut your investment in half, sell at 50%, and wait for another jump.
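As a toy illustration of why DCA is often the statistical underdog, the following compares DCA against a lump sum on a made-up four-period price series (all numbers are hypothetical):

```shell
# $400 total: DCA buys $100 at each price; the lump sum buys everything
# at the first price. In a net-rising series the lump sum edges ahead.
echo "10 8 12 11" | awk '{
    budget = 400; per = budget / NF
    for (i = 1; i <= NF; i++) dca += per / $i   # shares bought each period
    lump = budget / $1                          # shares bought up front
    printf "DCA: %.2f  Lump sum: %.2f\n", dca * $NF, lump * $NF
}'
```

This prints `DCA: 439.17  Lump sum: 440.00`: the lump sum wins here because the series ends above its start, matching the paragraph's point.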






Linux User Guide (1)

File links:

Hard link: $ln source dest

With a hard link, the link and the original file share the same inode.

Use the ls command to list link information:

$ls -i test

$ls -l test : the second column shows the number of links

Symbolic links: $ln -s source dest

With a symbolic link, each link has its own inode, and you can create a symbolic link to a file that doesn't exist.
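The inode behavior described above is easy to verify. A sketch using throwaway files under /tmp (the names are arbitrary):

```shell
# Hard links share the target's inode; symbolic links get their own.
cd /tmp && rm -f demo_orig demo_hard demo_soft
echo data > demo_orig
ln demo_orig demo_hard       # hard link
ln -s demo_orig demo_soft    # symbolic link
[ demo_orig -ef demo_hard ] && echo "hard link: same inode"
[ "$(ls -i demo_orig | awk '{print $1}')" != "$(ls -i demo_soft | awk '{print $1}')" ] \
    && echo "symlink: different inode"
rm -f demo_orig demo_hard demo_soft
```

The `-ef` test is true when two paths refer to the same device and inode.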

Job Control:

Commands: ps, ps -aux, fg, bg, CTRL-Z (suspend job), jobs, kill %i (the number after % is the job number)

Use /dev/null to discard unwanted output.
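A quick sketch of the idiom (the path tested is arbitrary):

```shell
# Discard both stdout and stderr; the exit status is still usable.
if ls /nonexistent > /dev/null 2>&1; then
    echo "exists"
else
    echo "missing"
fi
```

This prints `missing`, since /nonexistent does not exist; nothing from ls itself reaches the terminal.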

Stop and restart jobs:

CTRL-Z: suspend job;

$fg: restart job in the foreground;

$bg: restart job in the background;

$&: start job as background process.

Booting the system:

1. Make boot device manually

$rdev kernel-name root-device : set the root device in the kernel image

$rdev /vmlinuz /dev/hda2

$cp /vmlinuz /dev/fd0

2. Using LILO

The Win95 installer overwrites the LILO MBR. Before installing Win95, create a boot disk for Linux; after installing Win95, reboot with the Linux floppy, run /sbin/lilo, and reconfigure the system.

To overwrite LILO from DOS, use $fdisk /mbr.

Shutting down:

$shutdown hh:mm warning-message

$shutdown now

$shutdown -r 20:00 "we are shutting down" : -r means reboot the system after shutdown.

$halt

The /etc/inittab file:

At system boot, the kernel mounts the root file system and executes init, which reads /etc/inittab as its parameter file and spawns many child processes.
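Each /etc/inittab entry has the four-field form id:runlevels:action:process; a typical getty line looks like this (illustrative):

```
# id:runlevels:action:process
1:2345:respawn:/sbin/getty 38400 tty1
```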

Managing the file system:

1. mounting the file system
$ mount -av

2. /etc/fstab file
File systems supported by Linux: ext2, ext, minix, xia, umsdos, msdos, proc, iso9660, xenix, sysV, coherent, hpfs
Linux uses ext2 as its native file system; a swap partition has a mount point of none.
$ swapon -a
$ mount -t ext2 /dev/hda3 /usr

3. Device driver names:
hdx: IDE drive sdx: SCSI drive
fdx: floppy drive stx: SCSI tape drive
scdx: SCSI CD-ROM drive

/dev/hda: the whole first IDE drive
/dev/hda1: the first partition on the drive
/dev/hda2: the second partition on the drive

/dev/hda: IDE, master, primary
/dev/hdb: IDE, slave, primary
/dev/hdc: IDE, master, secondary
/dev/hdd: IDE, slave, secondary

Checking the file system:
$e2fsck -av /dev/hda2

$efsck $xfsck $fsck
If e2fsck reports it performed repairs on a mounted file system, you must reboot the system immediately.

Using a swap file:

Create a swap file: $dd if=/dev/zero of=/swap bs=1024 count=8208 : create one 8M swap file

$mkswap /swap 8208

$sync

$swapon /swap
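The dd step can be tried without root (mkswap and swapon do require root). With bs=1024 and count=8208 the file is 8208 KB, i.e. slightly over 8 MB; here it is written to /tmp to be safe:

```shell
# Create the file exactly as above and verify its size: 8208 * 1024 bytes.
dd if=/dev/zero of=/tmp/swapdemo bs=1024 count=8208 2> /dev/null
wc -c < /tmp/swapdemo
rm -f /tmp/swapdemo
```

The size reported is 8404992 bytes (8208 * 1024).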

Drawbacks of using a swap file:

1. Performance is not as good as with a swap partition, whose blocks are contiguous and to which I/O requests are made directly;

2. A large swap file carries a higher risk of corruption.

Advantage:

1. You can create a swap file for tasks that need extra swap space, and delete it afterward.


Saturday, June 2, 2007

Friday, June 1, 2007

Sun Certified Web Component Developer 310-081 Sample Questions

1. Which of the following methods is used to extract a session ID from a manually rewritten URL?

A. getParameter(String name)

B. getSessionID()

C. getPathInfo()

D. getID()


2. Which of the following is a legal exception-type description?

A. javax.servlet.ServletException

B. ServletException

C. javax.servlet.http.UnavailableException

D. UnavailableException


3. Which of the following options best defines the full signature name for the servlet method associated with a POST request? (Choose all that apply.)

A. protected void doPost(HttpServletRequest req, HttpServletResponse res) throws IOException, ServletException

B. public void doPost(HttpServletRequest req, HttpServletResponse res) throws IOException

C. public void doPost(ServletRequest req, ServletResponse res) throws IOException, ServletException

D. private void doPost(HttpServletRequest req, HttpServletResponse res) throws IOException, ServletException


4. Which of the following statements is true?

A. The listener tag is used to define all context and session listeners.

B. The listener interface name must be defined within the deployment descriptor.

C. The HttpSessionActivationListener must be defined within the originating server only.


5. When using FORM authentication, your form writes to which of the following URLs?

A. /servlet

B. j_security

C. j_security_source

D. j_security_check


6. A taglib directive must define which of the following attributes?

A. value

B. prefix

C. uri

D. uri and location

E. uri and prefix


7. Which of the following commands would best create a WAR file for a web application whose context is defined as /webapps/stocks?

A. jar -tvf stockApp.war /webapps/stocks

B. jar -cvf stockApp.war /webapps/stocks

C. war -cvf stockApp.war /webapps/stocks

D. jar -cvf stockApp.war /webapps/

E. Both war -cvf stockApp.war /webapps/stocks and jar -cvf stockApp.war /webapps/


8. Consider the following HTML page code:

<html><body>

<a href="/servlet/HelloServlet">POST</a>

</body></html>

Which method of HelloServlet will be invoked when the hyperlink displayed by the above page is clicked? (Select one)

a doGet

b doPost

c doForm

d doHref

e serviceGet


9. What is the term for determining whether a user has access to a particular resource? (Select one)

a Authorization

b Authentication

c Confidentiality

d Secrecy


10. Which of the following variables is not available for use in EL expressions?

a param

b cookie

c header

d pageContext

e contextScope













Yahoo CTO is leaving

 
Mr. Nazem, 45, CTO of Yahoo, is leaving; his departure follows a turbulent year at Yahoo that included a companywide revamping.

Quotes

Hope is tomorrow's veneer over today's disappointment.
 
 

Thursday, May 31, 2007

How to query server and client environment in PL/SQL

To query, inside a PL/SQL block, the Oracle server environment variables that were set when the database was started:

$ . oraenv
ORACLE_SID = [oracle] ? ora1022
$ /usr/ucb/ps auxwwe | sed -n
'/[o]ra_smon_ora1022/s/\(.*\)\(TZ=...\)\(.*\)/\2/p'
TZ=MET
$ export TZ=GMT
$ printf "%s\n" "set lines 10" "var f varchar2(40)" "set autop on" \
> "exec dbms_system.get_env('TZ',:f)" |
> sqlplus -s / as sysdba


To query the client or listener environment, use:

SQL> var f varchar2(40)
SQL> set autop on
SQL> exec sys.dbms_system.get_env('TZ',:f);

or

SQL> $echo %ORACLE_HOME%




Wednesday, May 30, 2007


Answer - "1Z0-042 sample questions"

Click here for original questions:
1Z0-042 sample questions


1. A. Tables share a namespace with views, sequences, private synonyms, procedures, functions, packages, materialized views, and user-defined types. Objects sharing a namespace cannot have the same name.

2. C. The database is the parameter supplied after the port designation. Therefore, you connect to the orcl database.

3. C. The Memory Monitor (MMON) process gathers performance statistics from the SGA (System Global Area) and stores them in the AWR. MMNL (Memory Monitor Light) also does some AWR-related statistics gathering, but not to the extent that MMON does. QMN1 is the process that monitors Oracle advanced queuing features. MMAN, not MMON, is the process that dynamically manages the sizes of each SGA component when directed to make changes by the ADDM (Automatic Database Diagnostic Monitor).

4. C. STARTUP is not a valid state; it is the command used to start the database.

5. C. The $ORACLE_HOME/install/portlist.ini file contains information about what ports are being used by the various Oracle tools.

6. D. You assign or change comments on a column with the COMMENT ON COLUMN statement. The COMMENT ON TABLE statement is used to add or change the comment assigned to a table.

7. D. The Undo Advisor screen uses the desired time period for undo data retention and analyzes the impact of the desired undo retention setting.

8. B, C, D. The substitution variable %d, which represents the database ID, is required only if multiple databases share the same archive log destination.

9. D. All four calendaring expressions execute a schedule every Dec. 28 at 8 p.m. "BYYEARDAY=-4" or "BYMONTH=DEC; BYMONTHDAY=28" specifies the date and month for the interval. Though all four are correct, the most meaningful and easiest to understand are items 1 and 4.

10. B

11. D

12. D.

13. C. In Oracle 10g, you can now recover datafile copies by applying changed blocks from a change tracking file to the datafile image copy. This is an important feature, as it significantly speeds up datafile recovery times. It is done in two stages:
1. Use an RMAN command to update the datafile image copy with changed blocks: RMAN> recover copy of datafile ;
2. Apply any archived redo logs to fine-tune the datafile to the exact point-in-time or SCN.

14. D

15. A, D. When implementing an RMAN-based backup strategy, you can use RMAN more effectively if you understand the more common options available to you. Many of these can be set in the RMAN environment on a persistent basis, so that you do not have to specify the same options every time you issue a command. To simplify ongoing use of RMAN for backup and recovery, RMAN lets you set a number of persistent configuration settings for each target database. These settings control many aspects of RMAN's behavior when working with that database, such as backup retention policy, default destinations for backups to tape or disk, default backup device type (tape or disk), and so on.
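For illustration, persistent settings of the kind this answer describes are set with RMAN CONFIGURE commands; a sketch (the values are arbitrary):

```
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> SHOW ALL;
```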



Answer - "1Z0-043 sample questions"

Click for original questions:
1Z0-043 sample questions


1. A, C. The DBID and DB_KEY are required to identify the database incarnation when using SQL*Plus to query the recovery catalog tables.

2. C. The correct command sequence for recovering a missing tempfile named temp is as follows:
1. STARTUP MOUNT
2. DROP TABLESPACE temp
3. CREATE TEMPORARY TABLESPACE temp TEMPFILE
The database must be mounted, and then the tablespace information needs to be dropped from the data dictionary. Then the tablespace can be created.

3. B. You can now recover through a RESETLOGS operation (an incomplete recovery uses RESETLOGS to open the database). In previous Oracle versions, you had to take a backup immediately following an incomplete recovery, because the redo log sequences were reset, making earlier backups unusable.

4. B. The Flashback Query will query the deleted customer 46453 from the undo data, and the insert command will add the customer back to the customers table. There will be minimal impact on the database.

5. D. The AWR does not store optimizer statistics. It stores dynamic performance statistics. Optimizer statistics are stored in the data dictionary.

6. B. The most likely cause is that the Oracle client environment is using a character set that does not match the server and is not a strict subset of the server character set. In this situation, Oracle will perform automatic data conversion, which can impact performance.

7. A. A simple plan can allocate CPU resources for up to eight consumer groups at the same level. By default, SYS_GROUP will be allocated 100 percent of level 1 CPU, and all other CPU allocation is done at level 2. Therefore, a simple plan will meet all of these requirements.

8. C. Server-generated alerts would be the best answer. Oracle has a predefined alert that detects ORA-00600 messages in the alert log and will raise an alert when they are found.

9. C. Block change tracking allows RMAN to back up only changed blocks from the last backup. The blocks are identified in a journaling system to expedite the process.

10. C. Multiplexing a backup is designed to improve the performance of the backup sets by copying multiple database files at the same time. Multiplexing can be used with image copies or backup sets.

11. C. Compressed backups work only with backup sets, not image copies. Thus compressed backups will work only with the BACKUP command.

12. A. The missing redo log must first be dropped even though it doesn't exist physically in the file system. This removes the redo log metadata from the data dictionary. Next, the log can be added back to the database.

13. A, B, C. You need two credentials when running a recovery with EM: the correct operating system account and the correct database account. The correct operating system account is an account similar to the Oracle account in Unix or the administrator account in Windows. The database account is any account that has SYSDBA privilege.

14. A. The view FLASHBACK_TRANSACTION_QUERY is used as a diagnostic tool to identify version information about transactional changes to the database. This view can be used to view the DML statements that were executed against a row and in a specific table.

15. C. The DBVERIFY utility uses the term pages instead of blocks. The DBVERIFY utility determines the amount of corrupt pages in a datafile.

