Friday, November 13, 2009
Two hours of effective work was enough to:
- install the product (including reading how to do it),
- install IIS on Windows 7 (you need a modern advanced OS),
- implement security,
- add a model (PoC) with a hierarchy,
- read most of the documentation.
Books Online is sparse, so I haven't really done a real walk-through yet. I hope a tutorial is coming soon, or I might post one here (after getting to know the thing better).
I wonder if I can run a hybrid model where some entities work as in the "registry" model, and some as in the "Central Master Data Repository" model.
I would like to put all my metadata into this thing; I hope it is mature enough.
I went to a couple of sessions at SQL PASS, but they were introductory; there can't be many people with experience with this thing yet.
Anyhow, have a nice day.
Tuesday, November 3, 2009
Q: Is there a performance hit on the "select" side because of fragmentation of the database files caused by auto-growth?
A: No. But you might want to enable "Perform volume maintenance tasks".
We have a generic script-based deployment of many databases, and therefore have auto-growth on them, so they are all created in exactly the same way. There are a total of ten databases with twelve files each, so 120 files. The files all start at 100MB and grow by 25%. Giving each database special treatment makes changes more expensive (we charge by the hour).
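Since every file shares the same settings, the growth pattern is easy to predict; here is a quick sketch (Python, using only the numbers from the setup above):

```python
# Ten databases x twelve files, all starting at 100 MB and growing
# by 25% per auto-growth event (numbers from the setup above).
files = 10 * 12
size_mb = 100.0
for step in range(1, 5):
    size_mb *= 1.25  # one 25% auto-growth event
    print(f"after auto-growth #{step}: {size_mb:.2f} MB")
print(f"files managed this way: {files}")
```

So after only four growth events each file has more than doubled, which is why zeroing those extents on every growth (the thing instant initialization avoids) starts to hurt.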
1) Keep scripts as is.
2) Enable "Instant Initialization"
a) Open a command window and run "sc qc <your SQL Server service name>" (for a default instance the service name is MSSQLSERVER).
b) Note the user name in the SERVICE_START_NAME attribute.
c) Fire up the local policy editor: Start -> Run -> gpedit.msc.
d) Navigate to "Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignments" and open "Perform volume maintenance tasks".
e) Add the user from step b).
f) Restart your SQL Server instance (not sure if this is needed, but it seems a good idea).
And that's all, have a nice day.
BTW: I will not mention the name of the expert who helped me, because that person cannot give public advice for legal reasons. And things might have been distorted by me translating the advice into this post.
Thursday, September 17, 2009
I work with Business Intelligence and use slowly changing history (see Wikipedia on slowly changing dimensions) extensively. We have a multi-layered topology and use multiple modeling techniques, including data vault, star schema and the relational model.
The way we implement history…
- It requires one or more identified unique key fields in every source system table.
- One row in the original source system initially produces two rows in the destination (in each layer of the topology).
- A change in any field in the source produces a new row in the destination.
- Deletes in the source create new rows in the destination.
This way a row in the source system can easily become ten or more rows in the data warehouse. With modern hardware this is not really a problem, even if we get into the terabytes.
The problem comes when we get errors in the history mechanisms. A typical problem is that one static row in a source system results in a new row in the destination on every load, even if there are no real changes.
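A classic cause of exactly that failure is NULL handling in the change detection. This is a minimal sketch of the idea (Python, with illustrative data of my own, not our actual implementation):

```python
def changed_naive(old, new):
    # Mimics a plain SQL "old.col <> new.col" comparison: a NULL (None)
    # on either side counts as "changed", so a static row containing a
    # NULL column produces a new version on every single load.
    return any(o is None or n is None or o != n for o, n in zip(old, new))

def changed_nullsafe(old, new):
    # NULL-safe comparison: two NULLs are considered equal,
    # so an unchanged row really is detected as unchanged.
    return any(o != n for o, n in zip(old, new))

row = ("Acme", None, 42)            # static source row with one NULL attribute
print(changed_naive(row, row))      # spurious "changed" verdict
print(changed_nullsafe(row, row))   # correctly reports no change
```

With the naive comparison, the single NULL attribute makes the row look changed on every load; introduced in an early layer, that one bug multiplies rows through every layer downstream.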
If introduced in an early layer, this will spread like the plague in the warehouse (and not like bird flu, either).
These errors take time to solve and require expensive talent.
We now have a set of functionality that automatically fixes things up and centralizes any misbehaving data in our metadata repository or elsewhere.
We can use the talent to solve more of the problems that interest the customer, and be more reactive (less proactive, more execution-focused) on this technical problem, because problems will no longer grow severe.
I find data warehousing pretty nice, since there is untouched ground here and there with great promise. Of course, the technical side of these projects is not necessarily the hardest to deliver on, but it would be sad if it were the show stopper.
I also like some of the metaphors used in this sport, especially the time line (I'll use it next time).
Wednesday, August 12, 2009
Hardware is a Virtual PC image on a Vista/64 laptop.
- No UAC
- 1GB RAM for the image
- 10 logical disks on three image files, 64KB cluster size for the data drives
- Separate program files, DB files, temp files and log files
I installed the SQL Server engine and Integration Services and tried some complex SSIS packages created for 2008. I use:
- custom data flow components
- script based deployment
- hierarchy of packages (packages calling other packages)
- custom Meta Data Repository
- advanced configuration
- SQL CLR TVF
- …other potentially problematic things
The packages implement ETL for an enterprise data warehouse.
Things seem to work as is; the product looks better than I expected. I get an RC feeling using this thing. If only it performed well in the virtual environment, but it still does not: all the GUI tools are IMO slow, though this might be because of fragmentation on the disk holding the VHD. I'll fix the fragmentation and have another look. I'll also try an ongoing "appliance project" I have that demands compatibility with many parts of the internal SQL Server meta models.
I have a good feeling about the new SQL Server.
That's it; have a good one,
Monday, May 25, 2009
This is the shortest program I have written; it is two bytes.
The task of this program is to execute its parameter as a program; I use it inside Visual Studio on the "Open With" menu to call cmd scripts from inside the GUI.
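The program itself is not shown in the post; assuming it was a Windows cmd script (which the "Open With" usage suggests), a two-byte file that does exactly this would contain nothing but:

```
%1
```

cmd expands %1 to the script's first argument, so that one line simply runs whatever file was passed in.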
The only program that comes close is a 5-byte boot program I wrote in 8086 assembly about twenty years ago.
Saturday, March 7, 2009
powercfg.exe /hibernate off
This will remove your hibernation file, which, on a 2008 server, makes no sense anyway. You gain an amount of disk space equal to your installed RAM. I just got another 4GB with this command, and feel like starting a defrag.