April 12-13, 2010: Delphi / DataSnap Development Masterclass in UK

This 2-day Delphi 2010 / DataSnap 2010 masterclass in Upavon, Wiltshire, UK on April 12-13, 2010, is organised together with the UK Developers Group, features Bob Swart as presenter, and is currently open for bookings (we have some registrations already and limited space, so don't wait too long to book your own place ;-)). The first day of the masterclass covers the new features in Delphi since 2007, using Windows 7 (because some of the features require Windows 7); the second day is a fully hands-on day using DataSnap.
Read More

Delphi 2010 RTTI Contexts: how they work, and a usage note

Delphi 2010 includes extended support for RTTI, also known as run-time type information or reflection. Many design approaches that were previously only practical in managed languages like C# and Java, because of the code annotation and introspection they require, should now be possible in the Delphi world.

One interesting aspect of how the RTTI works is its approach to object pooling. Delphi isn't a garbage-collected language, so users need to be careful to free objects when they're no longer needed, either explicitly, or by designing or using some kind of ownership scheme, such as the one used by TComponent, where the Owner takes care of destruction.

Type information usage scenarios don't mesh particularly well with TComponent-style ownership. Typically, when working with RTTI, you want to search for interesting objects, do something with them, and then go on your way. That means many objects may get allocated for inspection but never actually used. Managing those objects' lifetimes individually would be tedious, so a different approach is used: there is a single global RTTI object pool. While at least one RTTI context is active in the application, this pool keeps all its objects alive. When the last context goes out of scope, the objects are freed.

The pool management works by using a Delphi record that contains an interface reference. The first time any given RTTI context is used, it fills in this interface reference. It can't fill it in any earlier than first use, because Delphi records don't support default constructors, which in any case have problems of their own. For example, how do you handle exceptions in default constructors, in all the places they can occur: allocating arrays, thread-local variables, global variables, global variables in packages, temporary objects created in expressions, and so on? It can get ugly, and in C++ it sometimes does.

So, this first use allocates an interface, called a pool token. It acts as a reference-counted handle to the global object pool. For as long as this interface is alive, the global object pool stays alive. Even if the RTTI context is copied around, Delphi's built-in interface handling, designed along COM principles, ensures that the interface doesn't get disposed of prematurely or have its reference count muddled. When an RTTI context goes out of scope, whether as a local variable in a function that is exited or as a field in an object that is freed, the reference count is reduced. When it hits zero, the pool is emptied.

The biggest upside of this approach is that RTTI usage feels reasonably cheap, conceptually speaking. Code need only declare a variable of the appropriate type and start using it:

procedure Foo;
var
  ctx: TRttiContext;
  t: TRttiType;
begin
  t := ctx.GetType(TypeInfo(Integer));
  Writeln(t.Name);
end;

A downside, however, is that the lazy initialization can create a gotcha. Imagine this scenario:

  1. Library A declares an RTTI context A.C
  2. User code B declares an RTTI context B.C
  3. B pulls some RTTI objects O out of B.C, in order to hand them to library A
  4. B.C goes out of scope
  5. Library A now tries to work with O, but discovers, much to its surprise, that the objects have been prematurely disposed, even though A already has an RTTI context, A.C

The problem is that A never used A.C, so it never allocated a pool token. When B.C used its context, the pool came into being and the objects O were assigned to it; but after B.C went out of scope, the objects were freed.

The solution is for library A, knowing that it has a long-lived RTTI context and expects to communicate with third-party code that allocates objects from its own RTTI context and hands them back, to ensure that the long-lived context's pool token is allocated. A trivial way to do that:

type
  TFooManager = class
    FCtx: TRttiContext;
    // ...
    constructor Create;
    // ...
  end;

constructor TFooManager.Create;
begin
  FCtx.GetType(TypeInfo(Integer));
  // ...
end;

This will allocate only a bare minimum of RTTI objects, those needed to represent the type System.Integer; but more importantly, it will ensure that FCtx has a pool token and will keep the global RTTI pool alive.

In future releases of Delphi, the static method TRttiContext.Create will make sure that its return value has a pool token allocated; currently, it does not. TRttiContext.Create was originally defined to make the TRttiContext record feel more like a class for people unfamiliar with the idiom of using interfaces for automated deterministic lifetime management. The corresponding TRttiContext.Free method disposes of the internal pool token, and should remain the same.
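The premature-disposal scenario can be sketched in a few lines. This is a minimal illustration, not library code: TLibraryA and UserCodeB are invented names standing in for "library A" and "user code B" in the scenario above.

```delphi
uses
  SysUtils, TypInfo, Rtti;

type
  // Hypothetical stand-in for "library A": it declares a context (A.C)
  // but never uses it before receiving objects from outside.
  TLibraryA = class
  private
    FCtx: TRttiContext;   // A.C: no pool token until first use
    FHeld: TRttiType;
  public
    procedure Take(AType: TRttiType);
    procedure Use;
  end;

procedure TLibraryA.Take(AType: TRttiType);
begin
  FHeld := AType;         // borrowed from whichever pool is alive now
end;

procedure TLibraryA.Use;
begin
  Writeln(FHeld.Name);    // risky: FHeld may already have been disposed
end;

// Hypothetical stand-in for "user code B"
procedure UserCodeB(Lib: TLibraryA);
var
  C: TRttiContext;        // B.C
begin
  Lib.Take(C.GetType(TypeInfo(Integer)));
end;                      // B.C goes out of scope here; if its pool token
                          // was the only one, the pool (and FHeld) dies too
```

The fix from the text is one line in the library's constructor, e.g. FCtx.GetType(TypeInfo(Integer)), which forces A.C to allocate its own pool token before any third-party objects are handed in.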
Read More

The PDA Today

It’s a little off topic but I’ve been using it so much these days I thought it worth a mention.

As mentioned in another post, I’ve used PDA devices since the late ’90s, starting out with the Psion 5 before I moved on to the very successful Palm Pilot, specifically the Palm Tungsten T. I switched to Windows Mobile with the purchase of the Palm Treo 700w a few years later, and in the last year I have had an iPhone, then a BlackBerry, and now I'm back to an iPhone.

My favourite was the Palm Tungsten, with its great screen and huge array of very useful applications. It never left my side. It wasn’t a phone at all, but it was a very practical PDA device. I admit I’ve been struggling since to find a replacement as genuinely practical as that little device.

I found the Treo 700w running Windows Mobile frustrating. The tiny screen did not allow the things I did with the Tungsten, for example running spreadsheets and viewing a calendar showing handy icons (in a third-party application). And the little keyboard took ages to learn to use effectively. I admit to reverting to Palm’s handwriting characters as my preferred way of entering any reasonable amount of text.

When I got my first iPhone I was dismayed. I couldn’t even find a spreadsheet – one of my most useful tools. There was no copy and paste, which was screamingly annoying, and it just didn’t have the power and usefulness I was… well… used to. Then, surprisingly, over the next couple of months I grew to understand the iPhone. It wasn’t trying to be a PC, and it did have its limitations; but accept that and you ended up with something that, although not nearly as powerful as other systems I’ve used, was very practical all the same. The penny had dropped and I finally understood the iPhone.

The iPhone was supplied under the contract I was on at the time, and when that was completed I handed it back.

The next contract supplied a BlackBerry Bold 9000 as its standard device. Aha, I thought, back into the “blue suit” of devices. The BlackBerry was the true workhorse: no time for nonsense like enjoyment. In my role controlling several development teams around the world, with fast push email processing, I was getting emails all day and night. But the workhorse was solid and reliable… once I had ironed out the bugs.

Turns out that the very limited application memory meant that every application had to be specifically closed down, or the memory would soon fill up. A full memory would slow the system down so much that the device became totally unusable until you either reset the BlackBerry, or waited out each painful command until you could shut down enough apps for the system to work again – and even then it sometimes still needed a reset.

This also prevented loading other applications, so I was left with a pretty boring device, yet (if I remembered to close each app regularly) a pretty effective one.

One BlackBerry feature that took me a while to get used to was that it didn’t have a touch screen. No stylus, no finger pointing, just a little trackball controlling a cross-hair cursor. I will temper this with the fact that I’m an experienced PDA user and was forever trying to “tap” a link rather than move the cross-hair to it and “select” the link. I never got used to that, but chalked it up as a learning experience, like learning to drive an automatic without trying to depress the clutch, or learning to be a passenger without trying to use the brake pedal.

I did have one pretty major issue with it, though. As I received so many emails that demanded a reply, I was annoyed to find that the BlackBerry would “send me” every email I sent. In other words, not long after I fired off a quick reply, I would receive another email. With so many emails coming in, I would be forced to bring out the BlackBerry again and check, only to find it was the email I had just sent; and if I didn’t open and read it, the BlackBerry would forever show that I had xx unread emails. Not something that endeared it to me after a while. I could not find a way around that issue.

Recently I had an opportunity to purchase another phone, so I looked at all the offerings. Having had a long and enjoyable relationship with Palm products, I really wanted to get the Palm Pre, but it’s still not available in Australia, with no signs of it ever being released here.

I opted for the simplicity and sheer “nifty-ness” of the iPhone. The new 3GS version was out here, which answered a lot of my earlier issues, and so far I am quite pleased with it. It’s taken me a while to learn to type on it (all my recent blogs have been written on the iPhone and transferred to the PC for spelling and formatting). It keeps insisting on replacing Aussie words with their American spellings, so I have to keep an eye on it. World domination ain’t here yet, people: we spell our words using S’s, and there’s no such thing as a Zee, they’re Zeds.

But back to the iPhone: I find that the number and power of apps has increased dramatically in the last year. Yes, there is even a spreadsheet now, and clunky as it is, I can use copy and paste at last. I enjoy the wireless connection, meaning the huge amount of data I paid for up front hardly gets used at all, as the danged thing keeps finding home, work, and free wireless connections to use.

Below are a few screen dumps of some of my favourite iPhone apps.

Read More

More on Cloud Computing

A few posts ago I wrote about attending a Cloud Computing seminar put on by NetSuite. Since then I have looked into it a little more, and I hope to dispel some of the misconceptions about Cloud Computing.

The idea sounds great, and it’s certainly the buzzword of the week if we believe the hype.

“But hang on,” I hear some of you saying, “Cloud Computing is just having apps on the Internet and we’ve had that for years. Why should it be different just because someone wanted to get their PhD by coming up with a new name for old technology?” …and you’d be right in asking that; it’s a pretty legitimate question in my book.

Definition

So let’s define Cloud Computing precisely, even more precisely than we did in my post of a few days ago. To be considered a true Cloud Computing system, an application must satisfy all of the following criteria:

  1. Application on the Internet. It must reside on the publicly available Internet. Publicly available does not mean anyone has access to your systems; you will still need to log in. Now there is such a thing as an Internal Cloud, and even a Private Cloud, but for the purposes of this paper I’m going to limit the discussion to fully public applications. That is: externally hosted applications where you use and store your information on the externally provided system (e.g. salesforce.com).
  2. Data on the Internet. The data you place into it must reside on the Internet, although obviously it must also be secure, so that only you have access to your data.
  3. Little to no up-front costs. You are not purchasing software licences or additional hardware. Sometimes you may purchase consulting services to assist in converting your systems and data. In some extreme cases, like perhaps a corporate-wide accounting system, additional consulting and training may be necessary.
  4. Nothing is installed on your computer apart from a web browser and perhaps some browser add-ons, like the Adobe PDF viewer.
  5. Costs are consumption based. In other words, pricing is charged per hour, per gigabyte, or per hits per month. The less you use it, the less it should cost you; the reverse is also true.
  6. On-demand. The service should, in its minimum configuration, be able to be set up by the user for use that same day. Of course, in very large and complex corporate systems this may take planning and often highly specialised consulting services.
  7. Scalable. As far as users are concerned, they shouldn’t have to worry about infrastructure at all. They should be able to scale from megabytes to terabytes of throughput without having to organise storage or backups, extra staff, servers, or any of the other hassles.

Advantages of Cloud Computing

This definition of Cloud Computing shows up a number of areas of cost savings, reduced hassle, and sometimes increased functionality over in-house systems. These include:

  • Costs can be avoided or deferred. In most cases, increasing either functionality or capacity should be a totally transparent process: no server purchases or increased IT management. The regular billing cycles of the Cloud Computing model allow businesses to accurately forecast their IT budget based on known consumption levels.
  • Increases a business’s ability to change. The on-demand model inherent in Cloud Computing enables organisations to increase or decrease computing capacity without hardware or IT management concerns. There is no lag time while IT management orders the new hardware; installs the appropriate drivers; sets up the new cabling; tests the newly RAIDed disks; increases the tape backup facilities; increases rack space; sets up Active Directory; and does all the project work that comes with installing new capacity. The ability to then just as quickly reduce that capacity, without worrying about costly hardware lying idle, is one compelling reason for the Cloud Computing model.
  • Faster ROI. The Cloud Computing model allows a business to pay for only the resources it consumes, and only as it consumes them. Businesses are able to see a faster return on their IT investment because there is no need to wait for resources to be procured, provisioned, and managed.
  • Increased mobile workforce access. Your users will be able to access the required business functionality without the overhead of network hardware, VPN software, and network management. Users will also be able to access their applications and data while at home, on the road, or in the office, from any computer. Some Cloud Computing vendors (some through third-party software vendors) allow access via mobile devices like the BlackBerry or iPhone.
  • Additional expert IT staff. Highly-skilled professionals are available through the Cloud Computing SaaS (Software as a Service) company to operate and maintain their (your) service.
  • Increased business continuity. In some cases, Cloud Computing can be used as a viable and inexpensive disaster recovery option, especially for storage.

Under the definition of Cloud Computing given in the last section, businesses and individuals should never need to be concerned about backups, infrastructure, server space, firewalls, upgrades, storage, daily security patches, or any of the plethora of other things that have nothing to do with running their business.

In summary, the ROI (return on investment) should be greater without the large up-front costs of infrastructure, software purchase and installation, and the manpower to manage it all. Infrastructure changes need no longer be a concern of the business, allowing both business growth and business reduction to occur without the penalty of either time lag and up-front purchases, or costly redundant and idle hardware. Also, businesses can plan their financial outlay with known, regular payments rather than large up-front purchases.

It is also in the nature of some businesses that additional infrastructure and computing power may sometimes be needed for short periods of time. Cloud Computing can accommodate these bursts without the huge infrastructure and set-up time and costs otherwise required for something that will not be needed after the task has been completed.

Pitfalls

All of this of course sounds like a utopian situation, but there are pitfalls that the consumer should take into consideration before embarking down that path. These can be summarised as follows:

  • Be sure of the SLAs. It is up to the consumer to be satisfied that the SLA will cover their requirements and that they can survive any unforeseen downtime or lack of service. Of course, downtime may occur even with the best server rooms and the best staff, so normal disaster management plans should always be in place anyway.
  • Consider the SLAs that you provide to your own customers. Will holding their data off-site hold up if a customer questions an SLA they hold with you? This is often overlooked in the rush for the savings and ease of adopting Cloud Computing.
  • Vendor lock-in. Will the provider allow you to access your data and how soon can they get it to you if you ask for it? Will they work with another provider to transfer your data if you ask for it? How easy will it be to transfer? Even if you could download your data, will it be accessible to you or will it be in a proprietary format only available from the provider?
  • How secure is your data from other eyes? It is possible that your closest competitor is, or will in the future be, using the same service you are using, perhaps even on shared resources. How will you know if your data has been stolen or hacked? One way is to review the audit trail, if there is one. It may let you know if someone is using an old, forgotten login to access your data, or if someone has too much access and should be investigated (it may be a very valued employee, so care must be taken, as others may have obtained their login). How can you tell if a sysadmin has copied your data? What security is in place at the SaaS provider to ensure this does not happen? Are they open to an external audit of their security?
  • Backups. What backups are taken? If your data is found to be corrupt, how far back can you go to obtain valid data?
  • Deleted data. If you permanently remove some sensitive information, has it truly been removed? On what other systems has it been stored?
  • Security. Many companies don’t even know how many computers connect to their data now, what data reside on those computers, or how and when they are accessed.

None of these items should stop you investigating Cloud Computing for your own organisation; however, you should not abdicate your responsibilities to a third-party provider. It is your data and your business that you are dealing with. It is up to you not only to obtain the cost savings that might ensure a good, profitable business model for your company (or your employer’s company), but also to ensure business continuity in the event of the unforeseen.

Many nay-sayers cite a few instances of data corruption or downtime; however, these must be put into the perspective of supplying the service in-house. If an external audit were performed on your current in-house systems, would they pass muster? Are you completely sure that your systems are safe and up to date with all patches? Do you know (I mean really know) who accesses your data now? If you had a disaster or fire on your premises that totally destroyed your server room and office, would your business survive from an IT perspective?

Although Cloud Computing has been around for a number of years, only recently under that name, it is seemingly still in its infancy, and take-up has not been rapid in some cases. While customers can hand over CRM and email systems to the Cloud, handing over the full enterprise system may be a little hard for most of us right now.

Read More

Mind Mapping

I have been using mind mapping software for many years now and it has often been a great boost to getting clarity around my thoughts. I know I’ve often been told that my mind needs a map to get around, and I’ve agreed with them. I know I think the sam… … Read More

DataSnap: In-Process Server Method

DataSnap server methods were introduced in Delphi 2009. Most of the videos and demos about DataSnap server methods only cover socket-based client/server communication, e.g. over the TCP or HTTP protocols.
However, DataSnap was designed as a scalable data access solution able to work in one-, two-, three- or more-tier models. All the examples we have seen so far suit two- or three-tier designs; I can’t find any example covering a one-tier, or in-process, design.
In fact, it is very simple to work with in-process server methods. Most steps are similar to those for out-of-process server methods.

Define a Server Method

Define the well-known EchoString() and Sum() server methods:

unit MyServerMethod;
interface
uses Classes, DBXCommon;
type
  {$MethodInfo On}
  TMyServerMethod = class(TPersistent)
  public
    function EchoString(Value: string): string;
    function Sum(const a, b: integer): integer; 
  end;
  {$MethodInfo Off}

implementation
function TMyServerMethod.EchoString(Value: string): string;
begin
  Result := Value;
end;

function TMyServerMethod.Sum(const a, b: integer): integer;
begin
  Result := a + b;
end;
end.

Define a DataModule to access the server method

Drop a TDSServer and a TDSServerClass onto the data module as usual, and define an OnGetClass event for the TDSServerClass instance. Please note that you don’t need to drop any transport components like TDSTCPServerTransport or TDSHTTPServer, as we only want to consume the server method in-process.
object MyServerMethodDataModule1: TMyServerMethodDataModule
  OldCreateOrder = False
  Height = 293
  Width = 419
  object DSServer1: TDSServer
    AutoStart = True
    HideDSAdmin = False
    Left = 64
    Top = 40
  end
  object DSServerClass1: TDSServerClass
    OnGetClass = DSServerClass1GetClass
    Server = DSServer1
    LifeCycle = 'Server'
    Left = 64
    Top = 112
  end
end


unit MyServerMethodDataModule;

implementation

uses MyServerMethod;

procedure TMyServerMethodDataModule.DSServerClass1GetClass(DSServerClass: TDSServerClass;
    var PersistentClass: TPersistentClass);
begin
  PersistentClass := TMyServerMethod;
end;

Generate Server Method Client Classes

There is no direct way to generate the server method client classes for an in-process server. You may use whatever approach you are familiar with: hook your server method class up to a TCP or HTTP transport service, start the service, and generate the client class from it by the usual means.
//
// Created by the DataSnap proxy generator.
//

unit DataSnapProxyClient;
interface
uses DBXCommon, DBXJSON, Classes, SysUtils, DB, SqlExpr, DBXDBReaders;
type
  TMyServerMethodClient = class
  private
    FDBXConnection: TDBXConnection;
    FInstanceOwner: Boolean;
    FEchoStringCommand: TDBXCommand;
    FSumCommand: TDBXCommand;
  public
    constructor Create(ADBXConnection: TDBXConnection); overload;
    constructor Create(ADBXConnection: TDBXConnection; AInstanceOwner: Boolean); overload;
    destructor Destroy; override;
    function EchoString(Value: string): string;
    function Sum(const a, b: integer): integer;
  end;

implementation
function TMyServerMethodClient.EchoString(Value: string): string;
begin
  if FEchoStringCommand = nil then
  begin
    FEchoStringCommand := FDBXConnection.CreateCommand;
    FEchoStringCommand.CommandType := TDBXCommandTypes.DSServerMethod;
    FEchoStringCommand.Text := 'TMyServerMethod.EchoString';
    FEchoStringCommand.Prepare;
  end;
  FEchoStringCommand.Parameters[0].Value.SetWideString(Value);
  FEchoStringCommand.ExecuteUpdate;
  Result := FEchoStringCommand.Parameters[1].Value.GetWideString;
end;

function TMyServerMethodClient.Sum(const a, b: Integer): Integer;
begin
  if FSumCommand = nil then
  begin
    FSumCommand := FDBXConnection.CreateCommand;
    FSumCommand.CommandType := TDBXCommandTypes.DSServerMethod;
    FSumCommand.Text := 'TMyServerMethod.Sum';
    FSumCommand.Prepare;
  end;
  FSumCommand.Parameters[0].Value.SetInt32(a);
  FSumCommand.Parameters[1].Value.SetInt32(b);
  FSumCommand.ExecuteUpdate;
  Result := FSumCommand.Parameters[2].Value.GetInt32;
end;

constructor TMyServerMethodClient.Create(ADBXConnection: TDBXConnection);
begin
  inherited Create;
  if ADBXConnection = nil then
    raise EInvalidOperation.Create('Connection cannot be nil.  Make sure the connection has been opened.');
  FDBXConnection := ADBXConnection;
  FInstanceOwner := True;
end;

constructor TMyServerMethodClient.Create(ADBXConnection: TDBXConnection; AInstanceOwner: Boolean);
begin
  inherited Create;
  if ADBXConnection = nil then
    raise EInvalidOperation.Create('Connection cannot be nil.  Make sure the connection has been opened.');
  FDBXConnection := ADBXConnection;
  FInstanceOwner := AInstanceOwner;
end;

destructor TMyServerMethodClient.Destroy;
begin
  FreeAndNil(FEchoStringCommand);
  FreeAndNil(FSumCommand);
  inherited;
end;

end.

Invoke the server method via in-process

You may see from the following code that there is no difference between accessing a server method in-process and out-of-process.
First, create an instance of the DataSnap server data module. This registers the DSServer (DSServer1 in this case) with TDBXDriverRegistry.
You may then use a TSQLConnection with 'DSServer1' as the driver name, instead of 'DataSnap' (which requires a socket connection), to initiate the in-process communication that invokes the server method.
var o: TMyServerMethodDataModule;
    Q: TSQLConnection;
    c: TMyServerMethodClient;
begin
  o := TMyServerMethodDataModule.Create(Self);
  Q := TSQLConnection.Create(Self);
  try
    Q.DriverName := 'DSServer1';
    Q.LoginPrompt := False;
    Q.Open;

    c := TMyServerMethodClient.Create(Q.DBXConnection);
    try
      ShowMessage(c.EchoString('Hello'));
    finally
      c.Free;
    end;

  finally
    o.Free;
    Q.Free;
  end;
end;

Troubleshoot: Memory leak after consuming in-process server methods

This happens in Delphi 2010 build 14.0.3513.24210. It may be fixed in a future release; you may check QC#78696 for the latest status. Please note that you need to add "ReportMemoryLeaksOnShutdown := True;" to the code to show the leak report.

The memory leak has no particular relation to in-process server methods. It appears to be a problem in the TDSServerConnection class, where the ServerConnectionHandler property is not freed after use.
Here is a fix for the problem:
unit DSServer.QC78696;
interface
implementation
uses SysUtils,
     DBXCommon, DSServer, DSCommonServer, DBXMessageHandlerCommon, DBXSqlScanner,
     DBXTransport,
     CodeRedirect;

type
  TDSServerConnectionHandlerAccess = class(TDBXConnectionHandler)
    FConProperties: TDBXProperties;
    FConHandle: Integer;
    FServer: TDSCustomServer;
    FDatabaseConnectionHandler: TObject;
    FHasServerConnection: Boolean;
    FInstanceProvider: TDSHashtableInstanceProvider;
    FCommandHandlers: TDBXCommandHandlerArray;
    FLastCommandHandler: Integer;
    FNextHandler: TDBXConnectionHandler;
    FErrorMessage: TDBXErrorMessage;
    FScanner: TDBXSqlScanner;
    FDbxConnection: TDBXConnection;
    FTransport: TDSServerTransport;
    FChannel: TDbxChannel;
    FCreateInstanceEventObject: TDSCreateInstanceEventObject;
    FDestroyInstanceEventObject: TDSDestroyInstanceEventObject;
    FPrepareEventObject: TDSPrepareEventObject;
    FConnectEventObject: TDSConnectEventObject;
    FErrorEventObject: TDSErrorEventObject;
    FServerCon: TDSServerConnection;
  end;

  TDSServerConnectionPatch = class(TDSServerConnection)
  public
    destructor Destroy; override;
  end;

  TDSServerDriverPatch = class(TDSServerDriver)
  protected
    function CreateConnectionPatch(ConnectionBuilder: TDBXConnectionBuilder): TDBXConnection;
  end;

destructor TDSServerConnectionPatch.Destroy;
var o: TDSServerConnectionHandlerAccess;
begin
  inherited Destroy;
  o := TDSServerConnectionHandlerAccess(ServerConnectionHandler);
  if o.FServerCon = Self then begin
    o.FServerCon := nil;
    ServerConnectionHandler.Free;
  end;
end;

function TDSServerDriverPatch.CreateConnectionPatch(
  ConnectionBuilder: TDBXConnectionBuilder): TDBXConnection;
begin
  Result := TDSServerConnectionPatch.Create(ConnectionBuilder);
end;

var QC78696: TCodeRedirect;
initialization
  QC78696 := TCodeRedirect.Create(@TDSServerDriverPatch.CreateConnection, @TDSServerDriverPatch.CreateConnectionPatch);
finalization
  QC78696.Free;
end.

Troubleshoot: "Invalid command handle" when consuming more than one server method at runtime in an in-process application

This happens in Delphi 2010 build 14.0.3513.24210. It may be fixed in a future release; you may check QC#78698 for the latest status.
To reproduce this problem, consume the server methods like this:
    c := TMyServerMethodClient.Create(Q.DBXConnection);
    try
      ShowMessage(c.EchoString('Hello'));
      ShowMessage(IntToStr(c.Sum(100, 200)));
    finally
      c.Free;
    end;

or this:
    c := TMyServerMethodClient.Create(Q.DBXConnection);
    try
      ShowMessage(c.EchoString('Hello'));
      ShowMessage(IntToStr(c.Sum(100, 200)));
      ShowMessage(c.EchoString('Hello'));
    finally
      c.Free;
    end;

Here is a fix for the problem:
unit DSServer.QC78698;
interface
implementation
uses SysUtils, Classes,
     DBXCommon, DBXMessageHandlerCommon, DSCommonServer, DSServer,
     CodeRedirect;

type
  TDSServerCommandAccess = class(TDBXCommand)
  private
    FConHandler: TDSServerConnectionHandler;
    FServerCon: TDSServerConnection;
    FRowsAffected: Int64;
    FServerParameterList: TDBXParameterList;
  end;

  TDSServerCommandPatch = class(TDSServerCommand)
  private
    FCommandHandle: integer;
    function Accessor: TDSServerCommandAccess;
  private
    procedure ExecutePatch;
  protected
    procedure DerivedClose; override;
    function DerivedExecuteQuery: TDBXReader; override;
    procedure DerivedExecuteUpdate; override;
    function DerivedGetNextReader: TDBXReader; override;
    procedure DerivedPrepare; override;
  end;

  TDSServerConnectionPatch = class(TDSServerConnection)
  public
    function CreateCommand: TDBXCommand; override;
  end;

  TDSServerDriverPatch = class(TDSServerDriver)
  private
    function CreateServerCommandPatch(DbxContext: TDBXContext; Connection:
        TDBXConnection; MorphicCommand: TDBXCommand): TDBXCommand;
  public
    constructor Create(DBXDriverDef: TDBXDriverDef); override;
  end;

constructor TDSServerDriverPatch.Create(DBXDriverDef: TDBXDriverDef);
begin
  FCommandFactories := TStringList.Create;
  InitDriverProperties(TDBXProperties.Create);
  // '' makes this the default command factory.
  AddCommandFactory('', CreateServerCommandPatch);
end;

function TDSServerDriverPatch.CreateServerCommandPatch(DbxContext: TDBXContext;
    Connection: TDBXConnection; MorphicCommand: TDBXCommand): TDBXCommand;
var
  ServerConnection: TDSServerConnection;
begin
  ServerConnection := Connection as TDSServerConnection;
  Result := TDSServerCommandPatch.Create(DbxContext, ServerConnection, TDSServerHelp.GetServerConnectionHandler(ServerConnection));
end;

function TDSServerCommandPatch.Accessor: TDSServerCommandAccess;
begin
  Result := TDSServerCommandAccess(Self);
end;

procedure TDSServerCommandPatch.DerivedClose;
var
  Message: TDBXCommandCloseMessage;
begin
  Message := Accessor.FServerCon.CommandCloseMessage;
  Message.CommandHandle := FCommandHandle;
  Message.HandleMessage(Accessor.FConHandler);
end;

function TDSServerCommandPatch.DerivedExecuteQuery: TDBXReader;
var
  List: TDBXParameterList;
  Parameter: TDBXParameter;
  Reader: TDBXReader;
begin
  ExecutePatch;
  List := Parameters;
  if (List <> nil) and (List.Count > 0) then
  begin
    Parameter := List.Parameter[List.Count - 1];
    if Parameter.DataType = TDBXDataTypes.TableType then
    begin
      Reader := Parameter.Value.GetDBXReader;
      Parameter.Value.SetNull;
      Exit(Reader);
    end;
  end;
  Result := nil;
end;

procedure TDSServerCommandPatch.DerivedExecuteUpdate;
begin
  ExecutePatch;
end;

function TDSServerCommandPatch.DerivedGetNextReader: TDBXReader;
var
  Message: TDBXNextResultMessage;
begin
  Message := Accessor.FServerCon.NextResultMessage;
  Message.CommandHandle := FCommandHandle;
  Message.HandleMessage(Accessor.FConHandler);
  Result := Message.NextResult;
end;

procedure TDSServerCommandPatch.DerivedPrepare;
begin
  inherited;
  FCommandHandle := Accessor.FServerCon.PrepareMessage.CommandHandle;
end;

procedure TDSServerCommandPatch.ExecutePatch;
var
  Count: Integer;
  Ordinal: Integer;
  Params: TDBXParameterList;
  CommandParams: TDBXParameterList;
  Message: TDBXExecuteMessage;
begin
  Message := Accessor.FServerCon.ExecuteMessage;
  if not IsPrepared then
    Prepare;
  for Ordinal := 0 to Parameters.Count - 1 do
    Accessor.FServerParameterList.Parameter[Ordinal].Value.SetValue(Parameters.Parameter[Ordinal].Value);
  Message.Command := Text;
  Message.CommandType := CommandType;
  Message.CommandHandle := FCommandHandle;
  Message.Parameters := Parameters;
  Message.HandleMessage(Accessor.FConHandler);
  Params := Message.Parameters;
  CommandParams := Parameters;
  if Params <> nil then
  begin
    Count := Params.Count;
    if Count > 0 then
      for Ordinal := 0 to Count - 1 do
      begin
        CommandParams.Parameter[Ordinal].Value.SetValue(Params.Parameter[Ordinal].Value);
        Params.Parameter[Ordinal].Value.SetNull;
      end;
  end;
  Accessor.FRowsAffected := Message.RowsAffected;
end;

function TDSServerConnectionPatch.CreateCommand: TDBXCommand;
var
  Command: TDSServerCommand;
begin
  Command := TDSServerCommandPatch.Create(FDbxContext, self, ServerConnectionHandler);
  Result := Command;
end;

var QC78698: TCodeRedirect;
initialization
  QC78698 := TCodeRedirect.Create(@TDSServerConnection.CreateCommand, @TDSServerConnectionPatch.CreateCommand);
finalization
  QC78698.Free;
end.
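The unit above relies on TCodeRedirect to swap in the patched `CreateCommand` when the unit initializes and to restore the original when it finalizes. As a loose, language-neutral sketch of that redirect-on-load / restore-on-unload idea (the names below are illustrative and not part of the DataSnap or CodeRedirect API), the same pattern in Python looks like:

```python
# Sketch of the TCodeRedirect pattern: replace a method at load time,
# restore the original at shutdown. All names here are hypothetical.

class ServerConnection:
    def create_command(self):
        return "original command"   # stands in for the buggy implementation

def create_command_patched(self):
    return "patched command"        # stands in for the patched override

class CodeRedirect:
    """Swap a method on a class; undo the swap when freed."""
    def __init__(self, cls, name, replacement):
        self.cls, self.name = cls, name
        self.original = getattr(cls, name)
        setattr(cls, name, replacement)              # "initialization" section
    def free(self):
        setattr(self.cls, self.name, self.original)  # "finalization" section

redirect = CodeRedirect(ServerConnection, "create_command", create_command_patched)
assert ServerConnection().create_command() == "patched command"
redirect.free()
assert ServerConnection().create_command() == "original command"
```

The real TCodeRedirect rewrites the function's entry point in memory rather than a method slot, but the lifecycle (patch in `initialization`, undo in `finalization`) is the same.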

Reference:

  1. QC#78696: Memory Leak in TDSServerConnection for in-process connection
  2. QC#78698: Encounter “Invalid command handle” when consume more than one server method at runtime for in-process application

Read More

Configure Windows 7 IIS7 for ISAPI DLL

IIS 7 on Windows 7 requires some configuration to get an ISAPI DLL working.  It is not as straightforward as it was with IIS 5.

Install IIS 7

  1. Go to Control Panel | Programs and Features | Turn Windows features on or off (requires elevated privileges).
  2. Check “Internet Information Services” and make sure “ISAPI Extensions” and “ISAPI Filters” are checked as well.
  3. Click the OK button to start the installation.

After IIS 7 is installed, open your favorite web browser and enter the URL http://localhost/ to make sure IIS is up and running.  You might need to check your firewall settings and add an exception for TCP port 80 traffic if necessary.

Configure for ISAPI DLL

Add Virtual Directory

First, you may need to add a virtual directory to host your ISAPI DLL:

  1. Open Internet Information Services Manager (requires elevated privileges).
  2. Right-click the “Default Web Site” node and click “Add Virtual Directory” in the popup menu:

Enter the “Alias” and “Physical Path” of the virtual directory:

Enable ISAPI for Virtual Directory

To enable ISAPI for the virtual directory:

  1. Select the virtual directory node (e.g. “ISAPI” in this example).
  2. Double-click the “Handler Mappings” icon.
  3. Click “Edit Feature Permissions…” in the Actions panel.
  4. An “Edit Feature Permissions” dialog appears.
  5. Check “Execute”.
  6. Click the OK button to commit the changes.

Enable Directory Browsing for Virtual Directory

This is optional but convenient.  To enable Directory Browsing for a virtual directory:

  1. Select the virtual directory node (e.g. “ISAPI” in this example).
  2. Double-click the “Directory Browsing” icon.
  3. Click “Enable” in the Actions panel.

Edit Anonymous Authentication Credentials

  1. Select the virtual directory node.
  2. Double-click the “Authentication” icon.
  3. Click to select the “Anonymous Authentication” item.
  4. Click “Edit…” in the Actions panel.
  5. A dialog will appear.
  6. Check “Application pool identity” and press the OK button to commit the changes.

Enable ISAPI modules

  1. Click the root node.
  2. Double-click the “ISAPI and CGI Restrictions” icon.
  3. Click “Edit Feature Settings…” in the Actions panel.
  4. Check the “Allow unspecified ISAPI modules” option.  This option allows any ISAPI DLL to be executed under IIS.  If you don’t use this option, you will need to specify the allowed ISAPI DLLs explicitly.

Edit Permission for Virtual Directory

  1. Select the virtual directory node (e.g. “ISAPI” in this example).
  2. Right-click the node and click “Edit Permissions” in the popup menu.
  3. A Properties dialog appears.
  4. Switch to the “Security” page.
  5. Click the Edit button to show the Permissions dialog.
  6. Add “IIS_IUSRS” to the permission list.

Enable 32-bit ISAPI DLLs on IIS 7 x64

This is only required if you are using IIS 7 x64 and would like to run a 32-bit ISAPI DLL on it.  If your ISAPI DLL and IIS 7 are both x86 or both x64, you may skip this step.

  1. Click the “Application Pools” node.
  2. Click the “DefaultAppPool” item.
  3. Click “Advanced Settings…” in the Actions panel.
  4. An “Advanced Settings” dialog appears.
  5. Set “Enable 32-Bit Applications” to True.
  6. Click the OK button to commit the changes.

If you don’t enable this option for a 32-bit application, you may encounter the following error when executing the ISAPI DLL from a web browser:

HTTP Error 500.0 – Internal Server Error

The page cannot be displayed because an internal server error has occurred.

HTTP Error 500.0 – Internal Server Error
Module    IsapiModule
Notification    ExecuteRequestHandler
Handler    ISAPI-dll
Error Code    0x800700c1
Requested URL    http://localhost:80/isapi/isapi.dll
Physical Path    C:\isapi\isapi.dll
Logon Method    Anonymous
Logon User    Anonymous 

You may now deploy your ISAPI DLLs into the virtual directory and execute them from a web browser.

DataSnap and ISAPI DLL

You may create a Delphi DataSnap ISAPI DLL and deploy it on IIS.  From time to time, you may encounter a compilation or deployment error after the ISAPI DLL has been consumed.  This is because the invoked ISAPI DLL is cached in the application pool, and you are not allowed to overwrite the ISAPI DLL while it is cached.

To overcome this problem, you need to perform Recycle operation:

  1. Click the “Application Pools” node.
  2. Right-click the “DefaultAppPool” item and click the “Recycle…” item.

Deploying as an ISAPI DLL is encouraged at the deployment stage, as IIS caches the ISAPI DLL for performance reasons.

However, caching may not be feasible during development, as a recycle has to be performed each time the ISAPI DLL is overwritten by a frequent recompile or redeploy.  You may consider compiling the server modules as a CGI application at development time.  Each invocation of a CGI is a separate OS process and won’t be cached by the IIS application pool.

Install CGI on IIS

  1. Go to Control Panel | Programs and Features | Turn Windows features on or off (requires elevated privileges).
  2. Check “Internet Information Services” and make sure “CGI” is checked.
  3. Click the OK button to start the installation.

Enable CGI Module

  1. Click the root node.
  2. Double-click the “ISAPI and CGI Restrictions” icon.
  3. Click “Edit Feature Settings…” in the Actions panel.
  4. Check the “Allow unspecified CGI modules” option.

Consume DataSnap Server Methods via URL

DataSnap server methods use JSON as the data stream, delivered over a REST protocol.  For example, a simple EchoString server method is defined as:

type
  {$MethodInfo On}
  TMyServerMethod = class(TPersistent)
  public
    function EchoString(Value: string): string;
  end;
  {$MethodInfo Off}

implementation

function TMyServerMethod.EchoString(Value: string): string;
begin
  Result := Value;
end;

To access this method, compiled into an ISAPI DLL, via URL, the URL looks something like:

http://localhost/datasnap/MyISAPI.DLL/datasnap/rest/TMyServerMethod/EchoString/Hello

and the response text will be:

{"result":["Hello"]}

Likewise, a CGI URL is

http://localhost/datasnap/MyCGI.exe/datasnap/rest/TMyServerMethod/EchoString/Hello
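The URL layout and the JSON envelope above can be sketched in a few lines. This is an offline illustration, not a live call: the helper name is hypothetical, and the response text is the literal shown above rather than one fetched from a running server.

```python
import json

def datasnap_rest_url(base, server_class, method, *params):
    # Layout: <base>/datasnap/rest/<Class>/<Method>/<Param1>/...
    return "/".join([base, "datasnap", "rest", server_class, method, *params])

url = datasnap_rest_url(
    "http://localhost/datasnap/MyISAPI.DLL",
    "TMyServerMethod", "EchoString", "Hello")
# -> http://localhost/datasnap/MyISAPI.DLL/datasnap/rest/TMyServerMethod/EchoString/Hello

# The response body wraps the method's return value(s) in a "result" array:
response_text = '{"result":["Hello"]}'
assert json.loads(response_text)["result"][0] == "Hello"
```

The same helper produces the CGI URL by passing the CGI executable's base path instead of the DLL's.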

Reference:

  1. DataSnap 2010 HTTP support with an ISAPI dll; Author: Tierney, Jim

Read More

Delphi and RAD Studio Essentials on Nov 16-18, 2009 in Sweden

From Monday, Nov 16th until Wednesday, Nov 18th, 2009, I'll also be in Stockholm, Sweden, for a 2.5-day RAD Studio 2010 Essentials seminar (spoken in English) covering Delphi topics like the IDE and Language enhancements (Generics, Anonymous Methods, Touch/Gesture), UNICODE, VCL Database Development (DBX4 and DataSnap), Web Development (IntraWeb, ASP.NET and AJAX), and XML, SOAP and Web Services. The .NET topics will be covered using Delphi Prism 2010.
Read More

Delphi 2010 Masterclass on Nov 2-4, 2009 in Helsinki, Finland

From Monday, Nov 2nd until Wednesday Nov 4th, 2009, I'll be in Helsinki, Finland, for a 2.5-day RAD Studio 2010 Essentials seminar (spoken in English) covering Delphi topics like the IDE and Language enhancements (Generics, Anonymous Methods, Touch/Gesture), UNICODE, VCL Database Development (DBX4 and DataSnap), Web Development (IntraWeb, ASP.NET and AJAX), and XML, SOAP and Web Services. The .NET topics will be covered using Delphi Prism 2010.
Read More

Delphi Community Edition

I’ve just read Jolyon Smith’s post on a Delphi Community Edition. All good stuff, and something I have been clamouring for, for quite a while. The thing is, I don’t think it will happen. Embarcadero are in the business of making money, not giving things away. Incidentally, I think they’d probably make more money in the long run if they did give some things away, but playing the long game can be very difficult.

So here’s a suggestion which is a mix of both the long and short game. I’ve suggested something like this before; here it is slightly refined. Create a Delphi Community Edition/Turbo Edition, call it what you want, but the most important thing is its price. Yep, it’s free, or perhaps even better, make it $99. (For some reason people think free stuff equates to poor quality.) Do all the things Jolyon suggested, digital watermarking for example. Create an app store on the Embarcadero website that can accept apps/components from the Community Edition. Embarcadero get a percentage of all sales (and perhaps they have some utility to remove the watermark when sold through the store). So users get their cheap edition, but they also get a reason to use it.

Why do people put themselves through the hassle of learning Objective-C and Cocoa for the iPhone when it’s arguably easier to develop for Windows Mobile? There’s money in it, that’s why! Embarcadero get their $99 for the IDE and their 5-10% cut of sales, but more importantly they get a new user: someone who would have used C# Express Edition, but saw an opportunity. Apple do have the advantage of a closed system, so perhaps it wouldn’t work for Delphi, but Embarcadero would not lose a thing by trying. The developers who currently would buy Delphi Professional are not the target and, if done right, would still want the Professional version. Your target is that new developer about to download C# Express.
Read More