Q. What is table partitioning?
A: SAP uses fact table partitioning to improve performance. You can partition
only on 0CALMONTH or 0FISCPER.
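The effect of a partitioning range can be illustrated with a small sketch (not SAP code), assuming the common rule that the system creates one partition per period in the chosen range plus two catch-all partitions for values below and above it:

```python
# Illustrative sketch: estimating the number of fact-table partitions for a
# 0CALMONTH range, assuming one partition per month in the range plus two
# catch-all partitions for out-of-range values.
def estimate_partitions(from_month: str, to_month: str) -> int:
    """Months are 'YYYYMM' strings, e.g. '200401'."""
    fy, fm = int(from_month[:4]), int(from_month[4:])
    ty, tm = int(to_month[:4]), int(to_month[4:])
    months_in_range = (ty - fy) * 12 + (tm - fm) + 1
    return months_in_range + 2  # plus the two outlier partitions

print(estimate_partitions("200401", "200412"))  # 12 months + 2 = 14
```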
Q. What options are available in transfer rules, and when ABAP code is
required in a transfer rule, what important variables can you use?
A: Assign an InfoObject, assign a constant, an ABAP routine, or a formula.
In a transfer routine, important variables include TRAN_STRUCTURE, RECORD_NO,
RESULT, RETURNCODE, and ABORT.
Q. How would you optimize the dimensions?
A: Use as many dimensions as possible for performance improvement. Example:
assume you have 100 products and 200 customers. If you put both in one
dimension, the dimension can grow to 20,000 rows (100 x 200); if you make
individual dimensions, the total number of rows is only 300 (100 + 200). Even
if you put more than one characteristic per dimension, do the math for the
worst-case scenario and decide which characteristics may be combined in a
dimension.
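The worst-case arithmetic above can be checked in a few lines:

```python
# Worst-case dimension sizing from the example above: 100 products and
# 200 customers, combined into one dimension vs. split into two.
products, customers = 100, 200

combined = products * customers   # every product/customer pair may occur
separate = products + customers   # one row per value in each dimension

print(combined)  # 20000
print(separate)  # 300
```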
Q. What are the conversion routines for units and currencies in the update
rule?
A: Time dimensions are automatically converted. Example: if the cube contains
calendar month and your transfer structure contains date, the date is
converted to calendar month automatically.
Q. Can you make an InfoObject an InfoProvider, and why?
A: Yes. When you want to report on characteristics or master data, you can
make them InfoProviders. Example: you can make 0CUSTOMER an InfoProvider and
do BEx reporting on 0CUSTOMER; right-click the InfoArea and select 'Insert
characteristic as data target'.
Q. What are the steps to load non-cumulative cubes?
A: 1. Initialize the opening balance in R/3 (S278).
   2. Activate extract structure MC03BF0 for DataSource 2LIS_03_BF.
   3. Set up historical material documents in R/3.
   4. Load the opening balance using DataSource 2LIS_40_S278.
   5. Load historical movements and compress without marker update.
   6. Set up the V3 update.
   7. Load deltas using 2LIS_03_BF.
Q. Give a step-by-step approach to archiving a cube.
A: 1. Double-click the cube (or right-click and select Change).
   2. Choose Extras -> Select archival.
   3. Choose fields for selection (like 0CALDAY, 0CUSTOMER, etc.).
   4. Define the file structure (max file size and max number of data objects).
   5. Select the folder (logical file name).
   6. Select the delete options (not scheduled, start automatically, or after
      event).
   7. Activate the cube.
   8. The cube is ready for archival.
Q. What are the load processes and post-processing options?
A: InfoPackage, read PSA and update data target, save hierarchy, update ODS
object, data export (open hub), delete overlapping requests.
Q. What are the data target administration tasks?
A: Delete index, generate index, construct database statistics, initial fill
of new aggregates, roll-up of filled aggregates, compression of the InfoCube,
activate ODS, complete deletion of the data target.
Q. What are the parallel processes that could have locking problems?
A: 1. Hierarchy/attribute change run.
   2. Loading master data for the same InfoObject; for example, avoid loading
      master data from different source systems at the same time.
   3. Rolling up the same InfoCube.
   4. Selective deletion of an InfoCube/ODS while loading in parallel.
   5. Activation or deletion of an ODS object while loading in parallel.
Q. How would you convert an InfoPackage group into a process chain?
A: Double-click the InfoPackage group, click the 'Process Chain Maint.'
button, and type in the name and description; the individual InfoPackages are
inserted automatically.
Q. How do you transform Open Hub data?
A: Using a BAdI.
Q. What data loading tuning can one do?
A: 1. Watch the ABAP code in transfer and update rules.
   2. Load-balance across different servers.
   3. Build indexes on source tables.
   4. Use fixed-length files if you load data from flat files, and put the
      file on the application server.
   5. Use content extractors.
   6. Use the 'PSA and data target in parallel' option in the InfoPackage.
   7. Start several InfoPackages in parallel with different selection options.
   8. Buffer the SID number ranges if you load a lot of data at once.
   9. Load master data before loading transaction data.
Q. What is ODS?
A: Operational Data Store. You can overwrite existing data in an ODS.
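The overwrite behavior can be pictured with a keyed-store analogy (not SAP code): records with the same key replace the earlier record, unlike an InfoCube, where key figures for the same characteristic combination are added up.

```python
# Analogy for ODS overwrite semantics: later records with the same key
# replace earlier ones instead of being added to them.
ods = {}
for order, amount in [("ORD1", 100), ("ORD2", 50), ("ORD1", 120)]:
    ods[order] = amount  # same key -> earlier value is overwritten

print(ods)  # ORD1 ends up as 120, not 220
```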
Q. What is the use of BW Statistics?
A: The set of cubes delivered by SAP is used to measure performance for
queries, data loading, etc. It also shows the usage of aggregates and the
cost associated with them.