Logic Quest

on the quest for logic in ...

Monday, February 28, 2005

Does anyone know marketing? This author surely does not

This 10-page article can be summarized as:
"You are your own marketing"
with no substantial or convincing proof or argument.


Friday, February 25, 2005

Using Aspects as a Test Coverage Tool

Got this from rants.
The author has moved to a new site, so I'm keeping this snippet here in case the contents disappear.

I was working on the Groovy project and wanted to know which tests covered a method I was modifying. I tried to use Clover for this, but it gave me way too much information and most of it wasn't that useful. Maybe AspectJ could help?


We are using JUnit for Groovy, so all tests happen to extend a particular base class and each test method follows a particular naming convention. My initial design was simple:

1) define a pointcut for test methods
2) define a pointcut for all methods in the test methods call stack
3) keep track of which methods are called for which tests

This is actually quite easy in AspectJ:


    // Coverage tracking is switched on with -Dgroovy.aspects.coverage=true.
    // (The full aspect needs junit.framework.TestCase and java.util.* imports.)
    private static final boolean enabled = Boolean.getBoolean("groovy.aspects.coverage");

    // Matches the execution of any JUnit test method (all our tests extend
    // TestCase and follow the test* naming convention) and binds the test instance.
    pointcut inTestClass(TestCase testCase) : this(TestCase) && execution(void test*()) && this(testCase);

    // Maps method signature -> set of test class names that exercised it
    // (initialized by the loading advice further down).
    private Map coverage;

    // For every method executed anywhere below a test method's call stack,
    // record which test exercised it.
    before(TestCase testCase) : if(enabled) && cflowbelow(inTestClass(testCase)) && execution(* *(..)) {
      String testname = testCase.getClass().getName();
      String methodSignature = thisJoinPointStaticPart.getSignature().toString();
      Set tests = (Set) coverage.get(methodSignature);
      if (tests == null) {
        tests = new HashSet();
        coverage.put(methodSignature, tests);
      }
      tests.add(testname);
    }

This gets me most of what I need. Unfortunately, in our Groovy build, each JUnit test is run in a separate VM, so we can't just build up a big map and be done with it. I thought about a few different ways to deal with this: I could use an external persistence mechanism, or I could have one output file per test. I didn't like the idea of having a million little files all over the place because it would be hard to search them quickly. So I downloaded Berkeley DB, got about five pages into the API, and realized some sort of crazed non-Java C programmer wrote it. Well, that was out. Instead, I brute-forced it. I added two more pieces of advice:


    // Before each test method runs, load any coverage recorded so far
    // (earlier tests may have run in a different VM).
    before(TestCase testCase) : if(enabled) && inTestClass(testCase) {
      try {
        File file = new File("results.ser");
        if (file.exists()) {
          ObjectInputStream ois = new ObjectInputStream(new FileInputStream(file));
          coverage = (Map) ois.readObject();
          ois.close();
        } else {
          coverage = new HashMap();
        }
      } catch (Exception e) {
        e.printStackTrace();
      }
    }

    // After each test method finishes, write the accumulated coverage back to disk.
    after(TestCase testCase) : if(enabled) && inTestClass(testCase) {
      try {
        File file = new File("results.ser");
        ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(file));
        oos.writeObject(coverage);
        oos.close();
      } catch (Exception e) {
        e.printStackTrace();
      }
    }

Good old-fashioned serialization to the rescue. Before running a test in a test case I load the old results; after running a test in a test case, I write the new results back out to disk. It can get pretty slow as you get towards the end of the run, but I figure that I can optimize it later. The easiest way to optimize it would be to append maps to the file and then crunch them all together when you load. That would save a lot of swapping the map in and out of memory, but this is just a prototype. After making this exquisite gem, I apply it to the groovy.jar, the junit.jar, and the test classes. Notice that because Groovy compiles down to Java class files, this works for Groovy methods as well. Isn't having one bytecode format grand? So then I run all the tests and get my "results.ser" file. What to do with it? Well, process it with Groovy, of course! Here is the simplest script I could come up with to do what I want:


import java.io.*;
map = new ObjectInputStream(new FileInputStream(args[0])).readObject();
map.findAll {
  if (it.key =~ args[1]) {
    return it;
  }
}.each {
  println it.key + ": " + it.value;
}

You pass it the "results.ser" file and a regular expression to match against method signatures and you get a list of signatures and all the tests that use them. Here is an example of the output once you are done:


Groovy:> groovy coverage.groovy results.ser bind
Object org.codehaus.groovy.sandbox.markup.StreamingMarkupBuilder.bind(Object): [org.codehaus.groovy.sandbox.markup.StreamingMarkupTest]
Object org.codehaus.groovy.sandbox.markup.BaseMarkupBuilder.bind(Closure): [DOMTest, org.codehaus.groovy.sandbox.markup.StreamingMarkupTest]
Object org.codehaus.groovy.sandbox.markup.StreamingDOMBuilder.bind(Object): [DOMTest]

So if I were changing the BaseMarkupBuilder.bind method, I would know that I have to run at least DOMTest and StreamingMarkupTest to make sure that I didn't regress. This feature is something that could readily go into an IDE like Eclipse. You modify a method, it looks at the call hierarchy of the tests or this runtime-generated file, determines which tests need to be run, and then launches them in the background after you build. If anything fails, you get the red squiggles on your method with the results of the test attached. Talk about iterative development! The XP people can even go the other way: write all your tests and keep fixing the code till the squiggles go away, and not only does it build, but it runs! I'm telling you, something like this is the next step the IDEs will have to take.
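As an aside, the "append maps to the file and crunch them all together when you load" optimization mentioned above is only described in prose. Here is a rough, hypothetical sketch of that idea in plain Java (the class and method names are made up): each test run appends its partial coverage map as a length-prefixed serialized block, and the reader merges all the blocks into one map.

import java.io.*;
import java.util.*;

public class CoverageLog {

    // Append one partial coverage map (method signature -> set of test names)
    // to the shared results file as a length-prefixed serialized block.
    public static void append(File file, Map partial) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(buffer);
        oos.writeObject(partial);
        oos.close();
        byte[] bytes = buffer.toByteArray();

        DataOutputStream out = new DataOutputStream(new FileOutputStream(file, true));
        out.writeInt(bytes.length);   // length prefix tells the reader where this block ends
        out.write(bytes);
        out.close();
    }

    // Read every appended block and crunch them together into one coverage map.
    public static Map merge(File file) throws IOException, ClassNotFoundException {
        Map merged = new HashMap();
        DataInputStream in = new DataInputStream(new FileInputStream(file));
        try {
            while (true) {
                int length;
                try {
                    length = in.readInt();
                } catch (EOFException endOfFile) {
                    break;                              // no more blocks
                }
                byte[] bytes = new byte[length];
                in.readFully(bytes);
                Map partial = (Map) new ObjectInputStream(new ByteArrayInputStream(bytes)).readObject();
                for (Iterator i = partial.entrySet().iterator(); i.hasNext();) {
                    Map.Entry entry = (Map.Entry) i.next();
                    Set tests = (Set) merged.get(entry.getKey());
                    if (tests == null) {
                        tests = new HashSet();
                        merged.put(entry.getKey(), tests);
                    }
                    tests.addAll((Set) entry.getValue());
                }
            }
        } finally {
            in.close();
        }
        return merged;
    }
}

With something like this, the after advice would only append the tests recorded in the current VM, and the reporting script would call merge once, instead of the whole map being rewritten after every test.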


Thursday, February 24, 2005

Focus on specific tasks during project initiation

Got this from TechRepublic.

Within each project management phase, there are tasks that are crucial to the project's success. This is especially true for initiation tasks because many of the decisions that you make during this phase are precursors for steps you'll take in later project stages.

Although all tasks you work on during initiation are important, you should devote extra attention to the following items:


Project manager selection: This may be the most important task of the project. The key factors to consider when making your decision include verifying appropriate knowledge, specialty skills, and experience that are commensurate with the perceived needs of the project.
Project team selection: This is perhaps the second most important initiation task, yet managers rarely put much thought into it. When choosing team members, keep in mind that you should identify which subject matter experts you'll need and then negotiate their availability.
Project charter: This gives legitimacy to the project and identifies its goals and objectives. The goals and objectives list is the first pass at the project scope. It's also important to identify the business needs and project deliverables--even if it's only at a high level. This gives weight to discussions with project stakeholders and may help identify project champions.
Financial analysis: The results of the initial financial analysis (which is conducted during the prioritization phase) set the standard for how the project will benefit the organization. The analysis needs to clearly demonstrate this benefit to potential project stakeholders.
Assumptions and constraints: You must create a document that identifies assumptions and constraints (to the extent that they can be documented). This document can be the key to further refining scope and deliverables during later project stages.
Project repository and historical information: The project manager should establish a common and easily accessible document repository for housing project documents and other information. Historical information from prior projects that are similar in nature can provide a treasure trove of useful information for your project.
Authority, roles, and responsibilities: The project manager is ultimately responsible for the project's delivery. Beyond that, project teams can take on a wide variety of organizational structures. You must identify and establish the project manager's authority, its source, and how to leverage it. It's also imperative to identify other team members' roles and responsibilities in order to understand how they'll interact with the overall project team. Finally, it is always good for the project manager to work with the team members' functional managers to ensure resource availability when needed.

If you happen to be in the unfortunate situation where project initiation is fast-tracked, you should focus your attention on selecting the right project manager and team for the job.

Project initiation is the time to be as task-oriented as possible. The more thorough you can be in laying the groundwork for your project early on, the better off your project will be in the long run.

Scott Withrow has more than 20 years of IT experience, including IT management, Web development management, and internal consulting application analysis.



Export or import with Oracle Data Pump

In Oracle 10g, exp and imp have been redesigned as Oracle Data Pump (although Oracle still ships and fully supports exp and imp). If you're used to exporting and importing with exp and imp, the Data Pump command-line programs have a syntax that will look very familiar.

Data Pump runs as a job inside the database, rather than as a stand-alone client application. This means that jobs are somewhat independent of the process that started the export or import. One machine (say a scheduled job) could start the export, while another machine (such as a DBA's laptop) can check the status of the job. Since the job is inside the database, if you want to export to a file, the first thing that you must do is create a database DIRECTORY object for the output directory, and grant access to users who will be doing exports and imports:

create or replace directory dumpdir as 'c:\';
grant read,write on directory dumpdir to scott;

Once the directory is granted, you can export a user's objects with command arguments that are very similar to exp and imp:

expdp scott/tiger directory=dumpdir dumpfile=scott.dmp
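
The import side takes the same style of arguments (this impdp example is mine, not part of the original article); bringing that dump file back in would look something like:

impdp scott/tiger directory=dumpdir dumpfile=scott.dmp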

While the export job is running, you can press [Ctrl]C (or the equivalent on your client) to "detach" from the export job. The messages will stop coming to your client, but the job is still running inside the database. Your client will be placed in an interactive mode (with an Export> prompt). To see the status of the job, type status. If you run expdp attach=<job_name>, you can re-attach to a running job.
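
For example, to re-attach from a new session to the parallel job named scott2 shown a little further down (this command is my own illustration, not from the original article), something like this should work:

expdp scott/tiger attach=scott2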

Data Pump doesn't necessarily have to write to files. There are now options that allow you to export database objects directly into a remote database over SQL*Net. You simply specify the remote option with the connect string of the remote database. This is something like a one-time database replication job.

Data Pump is much faster than the old exp and imp client commands. One new feature that really helps make it faster is the "parallel" option. With this option, Data Pump moves data with multiple parallel workers (four in the example below). For example, I ran the following job, pressed [Ctrl]C, and queried the status of the background job:

expdp scott/tiger directory=dumpdir dumpfile=scott2.dmp parallel=4
job_name=scott2

Export: Release 10.1.0.2.0 - Production on Friday, 31 December, 2004 14:54

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 -
Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SCOTT"."SCOTT2": scott/******** directory=dumpdir
dumpfile=scott2.dmp parallel=4 job_name=scott2
Estimate in progress using BLOCKS method...

Export> status

Job: SCOTT2
  Operation: EXPORT
  Mode: SCHEMA
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 4
  Job Error Count: 0
  Dump File: C:\SCOTT2.DMP
    bytes written: 4,096

Worker 1 Status:
  State: EXECUTING

Worker 2 Status:
  State: WORK WAITING

Worker 3 Status:
  State: WORK WAITING

Worker 4 Status:
  State: WORK WAITING

Not only does Data Pump run inside the database, but most of the command-line features are also exposed from inside the database through a PL/SQL API, DBMS_DATAPUMP. For example, you can start the export job from PL/SQL with the following code:

declare
  handle number;
begin
  handle := dbms_datapump.open('EXPORT','SCHEMA');
  dbms_datapump.add_file(handle,'SCOTT3.DMP','DUMPDIR');
  dbms_datapump.metadata_filter(handle,'SCHEMA_EXPR','= ''SCOTT''');
  dbms_datapump.set_parallel(handle,4);
  dbms_datapump.start_job(handle);
  dbms_datapump.detach(handle);
end;
/
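
Since DBMS_DATAPUMP is just a PL/SQL API, the same job could also be kicked off from any client that can execute a PL/SQL block. Here is a minimal JDBC sketch of that idea (my own illustration, not from the original article; the driver class, connection URL, and credentials are placeholders):

import java.sql.*;

public class StartDataPumpExport {
    public static void main(String[] args) throws Exception {
        // Placeholder driver/URL/credentials -- adjust for your environment.
        Class.forName("oracle.jdbc.OracleDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger");
        try {
            // The same anonymous block as above, sent as one string.
            String plsql =
                "declare\n" +
                "  handle number;\n" +
                "begin\n" +
                "  handle := dbms_datapump.open('EXPORT','SCHEMA');\n" +
                "  dbms_datapump.add_file(handle,'SCOTT3.DMP','DUMPDIR');\n" +
                "  dbms_datapump.metadata_filter(handle,'SCHEMA_EXPR','= ''SCOTT''');\n" +
                "  dbms_datapump.set_parallel(handle,4);\n" +
                "  dbms_datapump.start_job(handle);\n" +
                "  dbms_datapump.detach(handle);\n" +
                "end;";
            CallableStatement call = conn.prepareCall(plsql);
            call.execute();   // returns once the job is started; the job keeps running inside the database
            call.close();
        } finally {
            conn.close();
        }
    }
}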

Check out Data Pump to learn about many of its other great new features. For instance, Data Pump can rename datafiles, move objects to different tablespaces, and select schema objects or schemas using wildcard patterns or expressions. Data Pump can also act as an interface to external tables (i.e., a table can be linked to data stored in a Data Pump export file, much like the Oracle Loader external-table interface available since Oracle 9i).

Scott Stephens worked for Oracle for more than 13 years in technical support, e-commerce, marketing, and software development.



Saturday, February 12, 2005

interesting way to achieve high GoogleRank

This is an interesting way to achieve a high GoogleRank.
