Web 2.0 programming with Object Pascal (Part 2)

As I promised in my last article, here I'll show you how to add CRUD (Create, Read, Update and Delete) operations to the sample application. The first step is to add a toolbar to the grid, with three buttons (btnAdd, btnEdit and btnDelete) in charge of inserting, updating and deleting data. I'll also create a new popup form where the user will work with that data.

The new Grid

Instead of overwriting the files used in the last example, I recommend creating a new directory called samples2, containing all the files shown in this article.

NOTE: the file grid2.html has the same contents as grid1.html, so just copy it from the last example and paste it into the new directory, and don't forget to rename it to grid2.html. After that, you'll have to rename the div id "grid1" to "grid2".

This is the code for grid2.js:

Ext.onReady(function(){
  var dataStore = new Ext.data.JsonStore({
    //url: '/samples/customerslist.json',
    url: '/cgi-bin/customerslist',
    root: 'rows',
    method: 'GET',
    fields: [
      {name: 'id', type: 'int'},
      {name: 'firstname', type: 'string'},
      {name: 'lastname', type: 'string'},
      {name: 'age', type: 'int'},
      {name: 'phone', type: 'string'}
    ]
  });

  var btnAdd = new Ext.Toolbar.Button({
    text: 'Add',
    handler: function(){
      var win = new MyPeopleWindow();
      // to refresh the grid after Insert
      win.afterPost = function(){ dataStore.load(); };
      win.show();
    }
  });

  var btnEdit = new Ext.Toolbar.Button({
    text: 'Edit',
    handler: function(){
      var win = new MyPeopleWindow(selRecordStore.id);
      // to refresh the grid after Update
      win.afterPost = function(){ dataStore.load(); };
      win.show();
    }
  });

  var btnDelete = new Ext.Toolbar.Button({
    text: 'Delete',
    handler: function(){
      Ext.Msg.confirm(
        'Delete customer?',
        'Are you sure to delete this customer?',
        function(btn){
          if(btn == 'yes'){
            var conn = new Ext.data.Connection();
            conn.request({
              url: '/cgi-bin/customerslist',
              method: 'POST',
              params: {"delete_person": selRecordStore.id},
              success: function(response, options) {
                // refresh the grid after Delete
                JSonData = Ext.util.JSON.decode(response.responseText);
                if(JSonData.success)
                  dataStore.load();
                else
                  Ext.Msg.alert('Status', JSonData.failure);
              },
              failure: function(response, options) {
                Ext.Msg.alert('Status',
                  'An error occurred while trying to delete this customer.');
              }
            });
          }
        }
      );
    }
  });

  var myGrid1 = new Ext.grid.GridPanel({
    id: 'customerslist',
    store: dataStore,
    columns: [
      {header: "First Name", width: 100, dataIndex: "firstname", sortable: true},
      {header: "Last Name", width: 100, dataIndex: "lastname", sortable: true},
      {header: "Age", width: 100, dataIndex: "age", sortable: true},
      {header: "Phone", width: 100, dataIndex: "phone", sortable: true}
    ],
    sm: new Ext.grid.RowSelectionModel({
      singleSelect: true,
      listeners: {
        rowselect: function(smObj, rowIndex, record){
          selRecordStore = record;
        }
      }
    }),
    tbar: [ btnAdd, btnEdit, btnDelete ],
    autoLoad: false,
    stripeRows: true,
    height: 200,
    width: 500
  });

  dataStore.load();
  myGrid1.render('grid2');
});

Now, the editor form:

MyPeopleForm = Ext.extend(Ext.FormPanel, {
  initComponent: function(){
    Ext.apply(this, {
      border: false,
      labelWidth: 80,
      defaults: { xtype: 'textfield', width: 150 },
      items: [
        {xtype: 'numberfield', fieldLabel: 'Id', name: 'id'},
        {fieldLabel: 'First Name', name: 'firstname'},
        {fieldLabel: 'Last Name', name: 'lastname'},
        {xtype: 'numberfield', fieldLabel: 'Age', name: 'age'},
        {fieldLabel: 'Phone', name: 'phone'}
      ]
    });
    MyPeopleForm.superclass.initComponent.call(this, arguments);
  },
  setId: function(idPerson) {
    this.load({
      method: 'POST',
      url: '/cgi-bin/customerslist',
      params: {'idperson': idPerson}
    });
  }
});

MyPeopleWindow = Ext.extend(Ext.Window, {
  constructor: function(idPerson){
    MyPeopleWindow.superclass.constructor.call(this, this.config);
    // if idPerson is not null, then edit that record;
    // otherwise it's a new record
    if(idPerson != null)
      this.form.setId(idPerson);
  },
  afterPost: function(){
    this.fireEvent('afterPost', this);
  },
  initComponent: function(){
    Ext.apply(this, {
      title: 'Loading data into a form',
      bodyStyle: 'padding:10px;background-color:#fff;',
      width: 300,
      height: 270,
      closeAction: 'close',
      items: [ this.form = new MyPeopleForm() ],
      buttons: [
        {
          text: 'Save',
          scope: this,
          handler: function(){
            this.form.getForm().submit({
              scope: this,
              url: '/cgi-bin/customerslist',
              method: 'POST',
              // here I add the param save_person
              // to let the CGI program decide
              // a course of action (save person data in this case)
              params: {'save_person': 'true'},
              success: function(form, action){
                // on success I just close the form
                this.afterPost();
                this.close();
              },
              failure: function(form, action){
                Ext.Msg.alert("Error",
                  "There was an error processing your request\n" +
                  action.result.message);
              }
            });
          }
        },
        {
          text: 'Cancel',
          handler: function(){ this.close(); },
          // important! Without "scope: this",
          // calling this.close() would try to close the Button,
          // and we need to close the Window, NOT the button.
          scope: this
        }
      ]
    });
    MyPeopleWindow.superclass.initComponent.call(this, arguments);
  }
});

That's all for the UI part. Now let's create our new customerslist.pp file, containing all the code required for the CGI application.

program cgiproject1;

{$mode objfpc}{$H+}

uses
  Classes, SysUtils,
  httpDefs, custcgi,        // needed for creating CGI applications
  fpjson,                   // needed for dealing with JSON data
  Db, SqlDb, ibconnection;  // needed for connecting to Firebird/Interbase

Type
  TCGIApp = Class(TCustomCGIApplication)
  Private
    FConn: TSqlConnection;
    FQuery: TSqlQuery;
    FTransaction: TSqlTransaction;
    procedure ConnectToDataBase;
    function GetCustomersList: string;
    function GetCustomer(AIdPerson: string): string;
    procedure FillJSONObject(AJson: TJsonObject);
    function SavePerson(ARequest: TRequest): string;
    function DeletePerson(ARequest: TRequest): string;
  Public
    Procedure HandleRequest(ARequest: TRequest; AResponse: TResponse); override;
  end;

procedure TCGIApp.ConnectToDataBase;
begin
  FConn := TIBConnection.Create(nil);
  FQuery := TSqlQuery.Create(nil);
  FTransaction := TSqlTransaction.Create(nil);
  with FConn do
  begin
    DatabaseName := 'TARJETA';
    UserName := 'SYSDBA';
    Password := 'masterkey';
    HostName := '192.168.1.254';
    Connected := True;
    Transaction := FTransaction;
    FQuery.Database := FConn;
  end;
end;

procedure TCGIApp.FillJSONObject(AJson: TJsonObject);
begin
  AJson.Add('id', TJsonIntegerNumber.Create(FQuery.FieldByName('IdCliente').AsInteger));
  AJson.Add('firstname', TJsonString.Create(FQuery.FieldByName('Apellido').AsString));
  AJson.Add('lastname', TJsonString.Create(FQuery.FieldByName('Nombres').AsString));
  AJson.Add('age', TJSONIntegerNumber.Create(FQuery.FieldByName('IdCliente').AsInteger));
  AJson.Add('phone', TJsonString.Create(FQuery.FieldByName('TelFijo').AsString));
end;

function TCGIApp.GetCustomersList: string;
var
  lPerson: TJSONObject;
  lJson: TJSONObject;
  lJsonArray: TJSONArray;
begin
  (* Query the database *)
  FQuery.Close;
  FQuery.Sql.Text := 'select * from clientes';
  FQuery.Open;
  FQuery.First;
  lJsonArray := TJSONArray.Create;
  lJson := TJSONObject.Create;
  try
    while not FQuery.Eof do
    begin
      lPerson := TJSONObject.Create;
      FillJSONObject(lPerson);
      FQuery.Next;
      (* Fill the array *)
      lJsonArray.Add(lPerson);
    end;
    (* Add the array to the rows property *)
    lJson.Add('rows', lJsonArray);
    Result := lJson.AsJSON;
  finally
    lJson.Free;
  end;
end;

function TCGIApp.GetCustomer(AIdPerson: string): string;
var
  lPerson: TJSONObject;
  lJson: TJSONObject;
begin
  (* Query the database *)
  FQuery.Close;
  FQuery.Sql.Text := 'select * from clientes where IdCliente=' + AIdPerson;
  FQuery.Open;
  FQuery.First;
  lJson := TJSONObject.Create;
  try
    lPerson := TJSONObject.Create;
    FillJSONObject(lPerson);
    (* Add the record to the data property *)
    lJson.Add('success', 'true');
    lJson.Add('data', lPerson);
    Result := lJson.AsJSON;
  finally
    lJson.Free;
  end;
end;

function TCGIApp.SavePerson(ARequest: TRequest): string;
var
  lId: string;
  lFirstName: string;
  lLastName: string;
  lPhone: string;
  lSql: string;
begin
  lId := ARequest.ContentFields.Values['id'];
  lFirstName := ARequest.ContentFields.Values['firstname'];
  lLastName := ARequest.ContentFields.Values['lastname'];
  lPhone := ARequest.ContentFields.Values['phone'];
  if lId <> '' then
    lSql := 'update clientes set ' +
            'nombres = ''' + lLastName + ''', ' +
            'apellido = ''' + lFirstName + ''', ' +
            'telfijo = ''' + lPhone + ''' where idcliente=' + lId
  else
  begin
    lSql := 'insert into clientes(IdCliente, Nombres, Apellido, TelFijo) ' +
            'values(Gen_Id(SeqClientes, 1),''' + lFirstName + ''', ''' +
            lLastName + ''', ''' + lPhone + ''')';
  end;
  try
    FQuery.Sql.Text := lSql;
    FConn.Transaction.StartTransaction;
    FQuery.ExecSql;
    FConn.Transaction.Commit;
    Result := '{''success'': ''true''}';
  except
    on E: Exception do
      Result := '{''message'': "' + E.Message + '"}';
  end;
end;

function TCGIApp.DeletePerson(ARequest: TRequest): string;
var
  lId: string;
begin
  lId := ARequest.ContentFields.Values['delete_person'];
  try
    FQuery.Sql.Text := 'delete from clientes where idcliente=' + lId;
    FConn.Transaction.StartTransaction;
    FQuery.ExecSql;
    FConn.Transaction.Commit;
    Result := '{''success'': ''true''}';
  except
    on E: Exception do
      Result := '{''failure'': ''Error deleting person.''}';
  end;
end;

Procedure TCGIApp.HandleRequest(ARequest: TRequest; AResponse: TResponse);
var
  lIdPerson: string;
begin
  if ARequest.ContentFields.Values['delete_person'] <> '' then
    AResponse.Content := DeletePerson(ARequest)
  else if ARequest.ContentFields.Values['save_person'] <> '' then
    AResponse.Content := SavePerson(ARequest)
  else
  begin
    lIdPerson := ARequest.ContentFields.Values['idperson'];
    if lIdPerson <> '' then
      AResponse.Content := GetCustomer(lIdPerson)
    else
      AResponse.Content := GetCustomersList;
  end;
end;

begin
  With TCGIApp.Create(Nil) do
  try
    Initialize;
    ConnectToDataBase;
    Run;
  finally
    Free;
  end;
end.

To compile this file, I use a simple Bash script that copies the compiled program to the /var/www/cgi-bin directory, where my Apache2 is configured to host executable CGI programs.

#!/bin/bash
fpc -XX -Xs -b -v ./customerslist.pp
cp ./customerslist /var/www/cgi-bin

In this article, I showed how to create an HTML page containing a grid, with a toolbar that allows CRUD operations. After I published my last article, some of you told me that this was too much work just to show a simple grid on a web page, and I agree; but when you start adding complexity to the application, the effort needed to add features is marginal, and the separation of concerns used here opens the door to better ways of speeding up code creation. I mean, instead of using a simple text editor to create the ExtJS code as I did (well, Vim is not a simple editor), you could try the new ExtJS Designer; and for the Object Pascal part, it is very easy to replace it with higher-level frameworks, like WebBroker or DataSnap.

Some of you asked why I didn't use ExtPascal in this article. I didn't use it because I wanted to show all the parts involved in an ExtJS application (HTML, JS, CGI app), and ExtPascal creates the JavaScript code automatically, hiding those internals from the programmer. I think it is very powerful for programmers who already know how to work with plain ExtJS, and after this article you already know how it works, so now I can write about ExtPascal!

Here are the files for this article.

What's next?

Before writing about ExtPascal, I'll show you how to replace the CGI part with a DataSnap server application. See you in my next post!
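The only contract between the grid's JsonStore and the CGI program is the shape of the JSON it returns: a top-level "rows" array whose objects carry the fields declared in the store. As a sketch (Python purely for illustration; the sample record values are made up), this is the shape the client code expects:

```python
import json

# Hypothetical response from /cgi-bin/customerslist (values invented):
# the JsonStore reads the top-level "rows" array (root: 'rows') and
# binds the declared fields to the grid columns.
sample = """
{"rows": [
  {"id": 1, "firstname": "Juan", "lastname": "Perez",
   "age": 42, "phone": "555-0101"}
]}
"""

data = json.loads(sample)
expected_fields = {"id", "firstname", "lastname", "age", "phone"}

assert "rows" in data                            # matches root: 'rows'
assert expected_fields <= set(data["rows"][0])   # fields the grid binds to
print(len(data["rows"]))                         # → 1
```

If the server renames a field or drops the "rows" wrapper, the grid simply renders empty, so checking the response shape first is a cheap debugging step.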

Random Thoughts on the Passing Scene #149

If you are a registered user of Delphi 2010, you can now download a digital copy of Marco Cantu’s new book, Delphi 2010 Handbook. (From Marco’s site: “The book covers all the new features of Delphi 2010 for Win32, from Extended RTTI to new IDE features, from Windows 7 support to the improved DataSnap architecture. This is a brand new book, there is no overlapping material with the Delphi 2007 Handbook and Delphi 2009 Handbook (which you can consider buying along with this book in printed or electronic format).”) I think you all know by now how great Marco’s stuff is. You can also order a hard copy of the book from Marco’s site. (Marco is now using CreateSpace as his publishing center. The CreateSpace guys are actually here in our current building; I play basketball with a couple of the guys who work there.) Anders apparently can’t control himself and has put a few more items up for auction, including a copy of Delphi 1 autographed by Allen Bauer. Michael Swindell sent me this link today: The Secret Origin of Windows, or as he called it, “How Turbo Pascal Shaped Windows 1.0”. Read and enjoy.

Time to say “Goodbye”

It’s time for me to say goodbye ... don’t worry, not to Delphi! I’m a long-time user of Rave Reports and a TeamNevrona member since 2002. Many know me from the newsgroups or have met me at conferences. Over those years, I gave more than 50 trainings and a lot of reporting sessions at conferences in Europe and the US about Rave Reports. I really love the power of Rave Reports, but I must admit that Rave has a steep learning curve for developers who come from other report engines. In my trainings I tried to open „the door into Rave“, and over the years I got a lot of positive feedback from many attendees. All the reports in my applications use Rave, and customers are happy with the performance and quality. But I have noticed (like many others; every week I get many emails from Rave users all over the world) that Nevrona Design does not seem to have the resources to publish a stable Rave 8 version, or a BE version for Delphi 2009 / 2010 that I can use in a production environment with complex reports. (I don’t mean the new IDE they announced in 2007; I mean only Unicode support and so on. The IDE is another story, which I won’t comment on right now.) It is unfortunate. I’ll move on to a new reporting engine with my change to Delphi 2010. But believe me, it’s not an easy path to take; however, I see no alternatives at the moment. A lot of developers must stay with Delphi 2007 or older because, like me, they need a stable Rave BEX with their additional components or customised sources. For more than two years we have been anticipating the release of Rave 8, yet Nevrona Design has still not published a stable version. I’m in the situation of switching to Delphi 2010, with its really cool DataSnap framework, and I need a stable reporting engine. I’ve ordered FastReport 4.9 and have made my first steps in FR over the last few weeks. I’ll not update this blog in the future; only one new post, about my new blog, will be coming in the next couple of weeks.
The content of the new blog will be focused on Delphi & SQL language & database development (well, I’ve been a database junkie for over 20 years now). Stay tuned...

April 12-13, 2010: Delphi / DataSnap Development Masterclass in UK

This two-day Delphi 2010 / DataSnap 2010 masterclass in Upavon, Wiltshire, UK on April 12-13, 2010 is organised together with the UK Developers Group, features Bob Swart as presenter, and is currently open for bookings (we have some registrations already and limited space, so don't wait too long to book your own place ;-)). The first day of the masterclass covers the new features in Delphi since 2007, using Windows 7 (because some of the features require it); the second day is a fully hands-on day using DataSnap.

Delphi 2010 RTTI Contexts: how they work, and a usage note

Delphi 2010 includes extended support for RTTI, also known as run-time type info or reflection. Many design approaches that have previously only been possible with managed languages like C# and Java, because of the code annotation and introspection they require, should now be possible in the Delphi world.

One somewhat interesting aspect of how the RTTI works is its approach to object pools. Delphi isn't a garbage-collected language, so users need to be careful to free objects when they're no longer needed, either explicitly, or by designing or using some kind of ownership scheme, such as the one used by TComponent, where the Owner takes care of destruction. Type-information usage scenarios don't mesh particularly well with a TComponent style of ownership. Typically, when working with RTTI, you want to do some kind of search for interesting objects, do something with them, and then go on your way. That means many objects may get allocated for inspection but never actually used. Managing those objects' lifetimes independently would be tedious, so a different approach is used: there is a single global RTTI object pool. While there is at least one RTTI context active in the application, this object pool keeps all its objects alive. When the last context goes out of scope, the objects get freed.

The pool management works by using a Delphi record that contains an interface reference. The first time any given RTTI context is used, it fills in this interface reference. It can't fill it in any earlier than first use, because Delphi records don't support default constructors, which besides have problems of their own. For example, how do you handle exceptions in default constructors, in all the places they can occur: allocating arrays, thread-local variables, global variables, global variables in packages, temporary objects created in expressions, etc.? It can get ugly, and in C++ it sometimes does. So, this first use allocates an interface, called a pool token. It acts as a reference-counted handle to the global object pool. For as long as this interface is alive, the global object pool stays alive. Even if the RTTI context is copied around, Delphi's built-in interface-handling logic, designed along COM principles, ensures that the interface doesn't get disposed of prematurely or get its reference count muddled up. And when an RTTI context goes out of scope, either by being a local variable in a function that is exited, or a field in an object that is freed, the reference count is reduced. When it hits zero, the pool is emptied.

The biggest upside of this approach is that RTTI usage should feel reasonably cheap, conceptually speaking. Code need only declare a variable of the appropriate type and start using it:

procedure Foo;
var
  ctx: TRttiContext;
  t: TRttiType;
begin
  t := ctx.GetType(TypeInfo(Integer));
  Writeln(t.Name);
end;

A downside, however, is that lazy initialization can create a gotcha. Imagine this scenario:

  1. Library A declares an RTTI context A.C
  2. User code B declares an RTTI context B.C
  3. B pulls some RTTI objects O out of B.C, in order to hand them to library A
  4. B.C goes out of scope
  5. Library A now tries to work with O, but discovers, much to its surprise, that the objects have been prematurely disposed, even though A already has an RTTI context, A.C

The problem is that A never used A.C, so it never allocated a pool token. When B.C was used, the pool came into being, and the objects O were assigned to it; but after B.C went out of scope, the objects got freed. The solution is for library A, knowing that it has a long-lived RTTI context and expects to communicate with third-party code that allocates objects from its own RTTI context and hands them back, to ensure that the long-lived context's pool token is allocated. A trivial way to do this:

type
  TFooManager = class
    FCtx: TRttiContext;
    // ...
    constructor Create;
    // ...
  end;

constructor TFooManager.Create;
begin
  FCtx.GetType(TypeInfo(Integer));
  // ...
end;

This will allocate only a bare minimum of RTTI objects, those needed to represent the type System.Integer; but more importantly, it will ensure that FCtx has a pool token and will keep the global RTTI pool alive. In future releases of Delphi, the static method TRttiContext.Create will make sure that its return value has a pool token allocated; currently, it does not. TRttiContext.Create was originally defined to make the TRttiContext record feel more like a class for people unfamiliar with the idiom of using interfaces for automated deterministic lifetime management. The corresponding TRttiContext.Free method disposes of the internal pool token, and should remain the same.

The PDA Today

It’s a little off topic but I’ve been using it so much these days I thought it worth a mention.

As mentioned in another post, I’ve used PDA devices since the late 90s, starting out with the Psion 5 before I moved on to the very successful Palm Pilot, specifically the Palm Tungsten T. I switched to Windows Mobile with the purchase of the Palm Treo 700w a few years later, and in the last year I have had an iPhone, then a Blackberry, and now I'm back to an iPhone.

My favourite was the Palm Tungsten, with its great screen and huge array of very useful applications. It never left my side. It wasn’t a phone at all, but it was a very practical PDA device. I admit I’ve been struggling since to find a replacement as genuinely practical as that little device.

The Treo 700w running Windows Mobile I found frustrating. The tiny screen did not let me push it to the lengths I had taken the Tungsten to, for example running spreadsheets and viewing a calendar with handy icons (in a third-party application). And the little keyboard took ages to learn to use effectively. I admit to reverting to Palm’s handwriting characters as my preferred way of entering any reasonable amount of text.

When I got my first iPhone I was dismayed. I couldn’t even find a spreadsheet, one of my most useful tools. There was no copy and paste, which was screamingly annoying, and it just didn’t have the power and usefulness I was… well… used to. Then, surprisingly, over the next couple of months I grew to understand the iPhone. It wasn’t trying to be a PC, and it did have its limitations, but accept that and you ended up with something that, although not nearly as powerful as other systems I’ve used, was very practical all the same. The penny had dropped and I finally understood the iPhone.

The iPhone was supplied by the contract I was in at the time and when that was completed I handed that back.

The next contract supplied a Blackberry Bold 9000 as a standard device. Aha, I thought, back into the “blue suit” of devices. The Blackberry was the true workhorse: no time for nonsense like enjoyment. In my role controlling several development teams around the world, with fast push email, I was getting emails all day and night. But the workhorse was solid and reliable… once I had ironed out the bugs.

It turns out that the very limited memory for applications meant that every application had to be specifically closed down or the memory would soon fill up. Filling up the memory would slow the system down so much that the device would be totally unusable until you were able either to reset the Blackberry, or to wait long enough for each painful command until you got to a point where you could shut down enough apps for the system to work again; and even then it sometimes still needed a reset.

This also prevented loading other applications, so I was left with a pretty boring, yet (if I remembered to close each app regularly) pretty effective, device.

One feature of the Blackberry that took me a while to get used to was that it didn’t have a touch screen. No stylus, no finger pointing, just a little trackball controlling a cross-hair cursor. I will temper this with the fact that I’m an experienced PDA user and was forever trying to “tap” a link rather than move the cross-hair to it and “select” the link. I never got used to that, but chalked it up as a learning experience, like learning to drive an automatic without trying to depress the clutch, or learning to be a passenger without trying to use the brake pedal.

I did have one pretty major issue with it, though. As I received so many emails that demanded my reply, I was annoyed to find that the Blackberry would “send me” every email I sent. In other words, not long after I fired off a quick reply I would receive another email. With so many emails coming in, I would be forced to bring out the Blackberry again and check, only to find it was the email I had just sent; and if I didn’t open it to read it, the Blackberry would forever show that I had xx unread emails. Not something that endeared it to me after a while. I could not find a way around that issue.

Recently I had an opportunity to purchase another phone, so I looked at all the offerings. Having had a long and enjoyable relationship with Palm products, I really wanted to get the Palm Pre, but it’s still not available in Australia, with no sign of it ever being released here.

I opted for the simplicity and sheer “nifty-ness” of the iPhone. The new 3GS version was out here, which answered a lot of my earlier issues, and I am so far quite pleased with it. It’s taken me a while to learn to type on it (all my recent blogs have been written on the iPhone and transferred to the PC for spelling and formatting). It keeps insisting on replacing Aussie words with their American spellings, so I have to keep an eye on it. World domination ain’t here yet, people: we spell our words using S’s, and there’s no such thing as a Zee, they’re Zeds.

But back to the iPhone: I find that the number and power of apps has increased dramatically in the last year. Yes, there is even a spreadsheet now and, clunky as it is, I can use copy and paste at last. I enjoy the wireless connection, meaning the huge amount of data I paid up front for hardly gets used at all, as the danged thing keeps finding home, work, and free wireless connections to use.

Below, a few screen dumps of some of my favourite iPhone apps.


More on Cloud Computing

A few posts ago I wrote about attending a Cloud Computing seminar put on by NetSuite. Since then I have looked into it a little more, and I hope to dispel some of the misconceptions about Cloud Computing.

The idea sounds great, and it’s certainly the buzzword of the week if we believe the hype.

“But hang on,” I hear some of you saying, “Cloud Computing is just having apps on the Internet, and we’ve had that for years. Why should it be different just because someone wanted to get their PhD by coming up with a new name for old technology?” …and you’d be right in asking that; it’s a pretty legitimate question in my book.

Definition

So let’s define Cloud Computing precisely, even more than we did in my post of a few days ago. To be considered a true Cloud Computing system, an application must satisfy all of the following criteria:

  1. Application on the Internet. It must reside on the publicly available Internet. Publicly available does not mean anyone has access to your systems; you will still need to log in. There is such a thing as an Internal Cloud, and even a Private Cloud, but for the purposes of this post I’m going to limit the discussion to fully public applications. That is: externally hosted applications where you can use and store your information on the externally provided system (e.g. salesforce.com).
  2. Data on the Internet. The data you place into it must reside on the Internet, although obviously it must also be secure so that only you have access to your Data.
  3. No to little up-front costs. You are not purchasing software licences or additional hardware. Sometimes you may purchase consulting services to assist in converting your systems and data. In some extreme cases like perhaps a corporate wide accounting system, additional consulting and training may be necessary.
  4. Nothing is installed on your computer apart from a web browser and perhaps some browser add-ons, like the Adobe PDF viewer.
  5. Costs are consumption based. In other words, pricing is charged per hour, per gigabyte, or per hits per month. The less you use it, the less it should cost you; and the reverse is also true.
  6. On-demand. The service should, in its minimum configuration, be able to be set up by the user for use that day. Of course in very large and complex corporate systems this may take planning and often highly specialised consulting services.
  7. Scalable. As far as the user is concerned they shouldn’t have to worry about infrastructure at all. They should be able to increase from megabytes to terabyte throughput without having to organise storage or backups, extra staff, servers, or any of the other hassles.

Advantages of Cloud Computing

This definition of Cloud Computing shows up a number of areas of cost savings, reduced hassles, and sometimes increased functionality over in-house systems. These include:

  • Costs can be avoided or deferred. In most cases, increasing both functionality and capacity should be a totally transparent process: no server purchases or increased IT management. The regular billing cycles of the Cloud Computing model allow businesses to accurately forecast their IT budget based on known consumption levels.
  • Increases a business’s ability to change. The on-demand model inherent in Cloud Computing enables organisations to increase or decrease computing capacity without hardware or IT management concerns, with no lag time while IT management orders the new hardware; installs the appropriate drivers; sets up the new cabling; tests the newly RAIDed disks; increases the tape backup facilities; increases rack space; sets up Active Directory; and does all the project work that comes with installing new capacity. The ability to then just as quickly reduce that capacity, without worrying about costly hardware lying idle, is one compelling reason for the Cloud Computing model.
  • Faster ROI. The Cloud Computing model allows businesses to pay for only the resources it consumes and only as it consumes them. Businesses are able to see a faster return on their IT investment because there is no need to wait for the resources to be procured, provisioned, and managed.
  • Increased mobile workforce access. Your users will be able to access required business functionality without the overhead of network hardware, VPN software, and network management. Users will also be able to access their applications and data while at home, on the road, or in the office from any computer. Some Cloud Computing vendors (some through third party software vendors) allow access via mobile devices like the Blackberry or iPhone.
  • Additional expert IT staff. Highly-skilled professionals are available through the Cloud Computing SaaS (Software as a Service) company to operate and maintain their (your) service.
  • Increased business continuity through inexpensive disaster recovery options. In some cases, Cloud Computing can be utilised as a viable disaster recovery option, especially for storage.

Under the definition of Cloud Computing given in the last section, businesses and individuals should never need to be concerned about backups, infrastructure, server space, firewalls, upgrades, storage, daily security patches, or any of the plethora of other things that have nothing to do with running their business.

In summary, the ROI (return on investment) should be greater without the large up-front costs of infrastructure, software purchase and installation, and the manpower costs to manage it all. Infrastructure changes need no longer be a concern of the business, allowing both business growth and business reduction to occur without the penalty of either time lag and up-front purchases, or costly redundant and idle hardware. Also, businesses can plan their financial outlay with known, regular payments rather than large up-front purchases.

It is also the nature of some businesses that sometimes additional infrastructure and computing power may be needed for short periods of time. Cloud Computing will be able to accommodate these bursts without the huge infrastructure and set up time costs required for something that will not be needed after the task has been completed.
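To make the consumption-based arithmetic concrete, here is a quick sketch with entirely hypothetical rates and usage figures (Python purely for illustration): a quiet month and a burst month are simply billed in proportion to what was consumed, with no idle hardware left over afterwards.

```python
# Entirely hypothetical consumption-based rates, for illustration only.
RATE_PER_SERVER_HOUR = 0.10   # dollars per server-hour
RATE_PER_GB_MONTH = 0.15      # dollars per GB stored per month

def monthly_cost(server_hours, gb_stored):
    """The bill scales with what was actually consumed that month."""
    return server_hours * RATE_PER_SERVER_HOUR + gb_stored * RATE_PER_GB_MONTH

quiet_month = monthly_cost(720, 50)    # one server all month, modest storage
burst_month = monthly_cost(2160, 50)   # three servers for a one-off burst

print(round(quiet_month, 2))   # → 79.5
print(round(burst_month, 2))   # → 223.5
```

Under an in-house model, the burst month would instead have required buying two extra servers that sit idle once the task is done; here the extra cost disappears the moment the capacity is released.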

Pitfalls

All of this, of course, sounds like a utopian situation, but there are pitfalls that the consumer should take into consideration before embarking down that path. These can be summarised as follows:

  • Be sure of the SLAs. It is up to the consumer to be happy that the SLA will cover their requirements and that they can survive any unforeseen downtime or lack of service. Of course this downtime may also occur even when they have the best server rooms and the best of staff so normal disaster management plans should always be in place anyway.
  • Consider the SLA that you provide to your customers. Will holding their data off-site hold up if a customer questions an SLA they hold with you? This is often overlooked in the rush for the savings and ease of adopting Cloud Computing.
  • Vendor lock-in. Will the provider allow you to access your data and how soon can they get it to you if you ask for it? Will they work with another provider to transfer your data if you ask for it? How easy will it be to transfer? Even if you could download your data, will it be accessible to you or will it be in a proprietary format only available from the provider?
  • How secure is your data from other eyes? It is possible that your closest competitor is, or will in the future be, using the same service that you are using, perhaps even on shared resources. How will you know if your data has been stolen or hacked? One way is to review the audit trail, if there is one. It may tell you whether someone is using an old, forgotten login to access your data, or whether someone has more access than they should, which warrants investigation (it may be a very valued employee, so take care: others may have obtained their login). How can you tell if a sysadmin has copied your data? What security is in place at the SaaS provider to ensure this does not happen? Are they open to an external audit of their security?
  • Backups. What backups are taken? If your data is found to be corrupt, how far back can you go to obtain valid data?
  • Deleted data. If you permanently remove some sensitive information, has it truly been removed? On what other systems has it been stored?
  • Security. Many companies don't even know how many computers connect to their data now, what data resides on those computers, or how and when they are accessed.

None of these items should stop you investigating Cloud Computing for your own organisation, however you should not abdicate your responsibilities to a third party provider. It is your data and your business that you are dealing with. It is up to you to not only obtain the cost savings that might ensure a good profitable business model for your company (or your employer’s company), but to ensure business continuity in the event of the unforeseen.

Many nay-sayers cite a few instances of data corruption or downtime; however, these must be put into the perspective of supplying the service in-house. If an external audit were performed on your current in-house systems, would it pass muster? Are you completely confident that your systems are safe and up to date with all patches? Do you know (I mean really know) who accesses your data now? If a disaster or fire on your premises totally destroyed your server room and office, would your business survive from an IT perspective?

Although Cloud Computing has been around for a number of years, only recently under that name, it is still seemingly in its infancy. The take-up has not been rapid in some cases. While customers can hand over CRM and email systems to the Cloud, handing over the full enterprise system to the Cloud may be a little hard for most of us right now.


Mind Mapping

I have been using mind mapping software for many years now and it has often been a great boost to get clarity around thought. I know that I’ve often been told that my mind needs a map to get around and I’ve agreed with them. I know I think the sam…

DataSnap: In-Process Server Method

DataSnap server methods were introduced in Delphi 2009.  Most videos and demos about DataSnap server methods only cover socket-based client/server communication, e.g. over the TCP or HTTP protocol.
However, DataSnap was designed as a scalable data access solution able to work with one-, two-, three- or more-tier models.  All the examples we have seen so far are suitable for 2- or 3-tier designs; I can't find any example covering a 1-tier, in-process design.
Indeed, it is very simple to work with in-process server methods.  Most steps are similar to those for out-of-process server methods.

Define a Server Method

Define the well-known EchoString() and a Sum() server method:

unit MyServerMethod;
interface
uses Classes, DBXCommon;
type
  {$MethodInfo On}
  TMyServerMethod = class(TPersistent)
  public
    function EchoString(Value: string): string;
    function Sum(const a, b: integer): integer; 
  end;
  {$MethodInfo Off}

implementation
function TMyServerMethod.EchoString(Value: string): string;
begin
  Result := Value;
end;

function TMyServerMethod.Sum(const a, b: integer): integer;
begin
  Result := a + b;
end;
end.

Define a DataModule to access the server method

Drop a TDSServer and a TDSServerClass on the data module as usual.  Define an OnGetClass event handler for the TDSServerClass instance.  Please note that you don't need to drop any transport components such as TDSTCPServerTransport or TDSHTTPServer, as we only want to consume the server method in-process.
object MyServerMethodDataModule1: TMyServerMethodDataModule
  OldCreateOrder = False
  Height = 293
  Width = 419
  object DSServer1: TDSServer
    AutoStart = True
    HideDSAdmin = False
    Left = 64
    Top = 40
  end
  object DSServerClass1: TDSServerClass
    OnGetClass = DSServerClass1GetClass
    Server = DSServer1
    LifeCycle = 'Server'
    Left = 64
    Top = 112
  end
end


unit MyServerMethodDataModule;

// ... interface section omitted ...

implementation

uses MyServerMethod;

procedure TMyServerMethodDataModule.DSServerClass1GetClass(DSServerClass: TDSServerClass;
    var PersistentClass: TPersistentClass);
begin
  PersistentClass := TMyServerMethod;
end;

Generate Server Method Client Classes

It is not as easy to generate the server method client classes for an in-process server, because there is no transport to connect to.  One workaround is to temporarily hook your server methods up to a TCP or HTTP transport service, start the service, and generate the client classes by the usual means.
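As a sketch of that workaround, a TCP transport can be added temporarily to the data module in DFM form (the component name, port and position shown here are assumptions) and removed again once the proxy has been generated:

```delphi
  // Temporary transport so the proxy generator can reach the server methods
  // over TCP; delete this component again after generating the client classes.
  object DSTCPServerTransport1: TDSTCPServerTransport
    Port = 211
    Server = DSServer1
    Left = 64
    Top = 184
  end
```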
//
// Created by the DataSnap proxy generator.
//

unit DataSnapProxyClient;
interface
uses DBXCommon, DBXJSON, Classes, SysUtils, DB, SqlExpr, DBXDBReaders;
type
  TMyServerMethodClient = class
  private
    FDBXConnection: TDBXConnection;
    FInstanceOwner: Boolean;
    FEchoStringCommand: TDBXCommand;
    FSumCommand: TDBXCommand;
  public
    constructor Create(ADBXConnection: TDBXConnection); overload;
    constructor Create(ADBXConnection: TDBXConnection; AInstanceOwner: Boolean); overload;
    destructor Destroy; override;
    function EchoString(Value: string): string;
    function Sum(const a, b: integer): integer;
  end;

implementation
function TMyServerMethodClient.EchoString(Value: string): string;
begin
  if FEchoStringCommand = nil then
  begin
    FEchoStringCommand := FDBXConnection.CreateCommand;
    FEchoStringCommand.CommandType := TDBXCommandTypes.DSServerMethod;
    FEchoStringCommand.Text := 'TMyServerMethod.EchoString';
    FEchoStringCommand.Prepare;
  end;
  FEchoStringCommand.Parameters[0].Value.SetWideString(Value);
  FEchoStringCommand.ExecuteUpdate;
  Result := FEchoStringCommand.Parameters[1].Value.GetWideString;
end;

function TMyServerMethodClient.Sum(const a, b: integer): integer;
begin
  if FSumCommand = nil then
  begin
    FSumCommand := FDBXConnection.CreateCommand;
    FSumCommand.CommandType := TDBXCommandTypes.DSServerMethod;
    FSumCommand.Text := 'TMyServerMethod.Sum';
    FSumCommand.Prepare;
  end;
  FSumCommand.Parameters[0].Value.SetInt32(a);
  FSumCommand.Parameters[1].Value.SetInt32(b);
  FSumCommand.ExecuteUpdate;
  Result := FSumCommand.Parameters[2].Value.GetInt32;
end;

constructor TMyServerMethodClient.Create(ADBXConnection: TDBXConnection);
begin
  inherited Create;
  if ADBXConnection = nil then
    raise EInvalidOperation.Create('Connection cannot be nil.  Make sure the connection has been opened.');
  FDBXConnection := ADBXConnection;
  FInstanceOwner := True;
end;

constructor TMyServerMethodClient.Create(ADBXConnection: TDBXConnection; AInstanceOwner: Boolean);
begin
  inherited Create;
  if ADBXConnection = nil then
    raise EInvalidOperation.Create('Connection cannot be nil.  Make sure the connection has been opened.');
  FDBXConnection := ADBXConnection;
  FInstanceOwner := AInstanceOwner;
end;

destructor TMyServerMethodClient.Destroy;
begin
  FreeAndNil(FEchoStringCommand);
  FreeAndNil(FSumCommand);
  inherited;
end;

end.

Invoke the server method in-process

You may see from the following code that there is no difference between accessing a server method in-process and out-of-process.
First, create an instance of the DataSnap server data module.  This registers the DSServer (DSServer1 in this case) with TDBXDriverRegistry.
You may then use a TSQLConnection with 'DSServer1' as the driver name, instead of 'DataSnap' (which requires a socket connection), to invoke the server method in-process.
var
  o: TMyServerMethodDataModule;
  Q: TSQLConnection;
  c: TMyServerMethodClient;
begin
  o := TMyServerMethodDataModule.Create(Self);
  Q := TSQLConnection.Create(Self);
  try
    Q.DriverName := 'DSServer1';
    Q.LoginPrompt := False;
    Q.Open;

    c := TMyServerMethodClient.Create(Q.DBXConnection);
    try
      ShowMessage(c.EchoString('Hello'));
    finally
      c.Free;
    end;

  finally
    o.Free;
    Q.Free;
  end;
end;

Troubleshooting: Memory leak after consuming in-process server methods

This happens in Delphi 2010 build 14.0.3513.24210.  It may have fixed in future release.  You may check QC#78696 for latest status.  Please note that you need to add “ReportMemoryLeaksOnShutdown := True;” in the code to show the leak report.
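For reference, that flag is typically set at the top of the project source; a minimal sketch (project, unit and form names here are hypothetical):

```delphi
program InProcessDemo;

uses
  Forms,
  MainForm in 'MainForm.pas' {Form1};

{$R *.res}

begin
  // Show a leak report dialog when the application shuts down
  ReportMemoryLeaksOnShutdown := True;
  Application.Initialize;
  Application.CreateForm(TForm1, Form1);
  Application.Run;
end.
```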

The memory leak has no direct relation to in-process server methods.  It appears to be a problem in the TDSServerConnection class, whose ServerConnectionHandler property is not freed after use.
Here is a fix for the problem:
unit DSServer.QC78696;
interface
implementation
uses SysUtils,
     DBXCommon, DSServer, DSCommonServer, DBXMessageHandlerCommon, DBXSqlScanner,
     DBXTransport,
     CodeRedirect;

type
  TDSServerConnectionHandlerAccess = class(TDBXConnectionHandler)
    FConProperties: TDBXProperties;
    FConHandle: Integer;
    FServer: TDSCustomServer;
    FDatabaseConnectionHandler: TObject;
    FHasServerConnection: Boolean;
    FInstanceProvider: TDSHashtableInstanceProvider;
    FCommandHandlers: TDBXCommandHandlerArray;
    FLastCommandHandler: Integer;
    FNextHandler: TDBXConnectionHandler;
    FErrorMessage: TDBXErrorMessage;
    FScanner: TDBXSqlScanner;
    FDbxConnection: TDBXConnection;
    FTransport: TDSServerTransport;
    FChannel: TDbxChannel;
    FCreateInstanceEventObject: TDSCreateInstanceEventObject;
    FDestroyInstanceEventObject: TDSDestroyInstanceEventObject;
    FPrepareEventObject: TDSPrepareEventObject;
    FConnectEventObject: TDSConnectEventObject;
    FErrorEventObject: TDSErrorEventObject;
    FServerCon: TDSServerConnection;
  end;

  TDSServerConnectionPatch = class(TDSServerConnection)
  public
    destructor Destroy; override;
  end;

  TDSServerDriverPatch = class(TDSServerDriver)
  protected
    function CreateConnectionPatch(ConnectionBuilder: TDBXConnectionBuilder): TDBXConnection;
  end;

destructor TDSServerConnectionPatch.Destroy;
var o: TDSServerConnectionHandlerAccess;
begin
  inherited Destroy;
  o := TDSServerConnectionHandlerAccess(ServerConnectionHandler);
  if o.FServerCon = Self then begin
    o.FServerCon := nil;
    ServerConnectionHandler.Free;
  end;
end;

function TDSServerDriverPatch.CreateConnectionPatch(
  ConnectionBuilder: TDBXConnectionBuilder): TDBXConnection;
begin
  Result := TDSServerConnectionPatch.Create(ConnectionBuilder);
end;

var QC78696: TCodeRedirect;
initialization
  QC78696 := TCodeRedirect.Create(@TDSServerDriverPatch.CreateConnection, @TDSServerDriverPatch.CreateConnectionPatch);
finalization
  QC78696.Free;
end.

Troubleshooting: "Invalid command handle" when consuming more than one server method at runtime in an in-process application

This happens in Delphi 2010 build 14.0.3513.24210.  It may have been fixed in a later release; check QC#78698 for the latest status.
To reproduce this problem, consume the server methods as:
    c := TMyServerMethodClient.Create(Q.DBXConnection);
    try
      ShowMessage(c.EchoString('Hello'));
      ShowMessage(IntToStr(c.Sum(100, 200)));
    finally
      c.Free;
    end;

or this:
    c := TMyServerMethodClient.Create(Q.DBXConnection);
    try
      ShowMessage(c.EchoString('Hello'));
      ShowMessage(IntToStr(c.Sum(100, 200)));
      ShowMessage(c.EchoString('Hello'));
    finally
      c.Free;
    end;

Here is a fix for the problem:
unit DSServer.QC78698;
interface
implementation
uses SysUtils, Classes,
     DBXCommon, DBXMessageHandlerCommon, DSCommonServer, DSServer,
     CodeRedirect;

type
  TDSServerCommandAccess = class(TDBXCommand)
  private
    FConHandler: TDSServerConnectionHandler;
    FServerCon: TDSServerConnection;
    FRowsAffected: Int64;
    FServerParameterList: TDBXParameterList;
  end;

  TDSServerCommandPatch = class(TDSServerCommand)
  private
    FCommandHandle: integer;
    function Accessor: TDSServerCommandAccess;
  private
    procedure ExecutePatch;
  protected
    procedure DerivedClose; override;
    function DerivedExecuteQuery: TDBXReader; override;
    procedure DerivedExecuteUpdate; override;
    function DerivedGetNextReader: TDBXReader; override;
    procedure DerivedPrepare; override;
  end;

  TDSServerConnectionPatch = class(TDSServerConnection)
  public
    function CreateCommand: TDBXCommand; override;
  end;

  TDSServerDriverPatch = class(TDSServerDriver)
  private
    function CreateServerCommandPatch(DbxContext: TDBXContext; Connection:
        TDBXConnection; MorphicCommand: TDBXCommand): TDBXCommand;
  public
    constructor Create(DBXDriverDef: TDBXDriverDef); override;
  end;

constructor TDSServerDriverPatch.Create(DBXDriverDef: TDBXDriverDef);
begin
  FCommandFactories := TStringList.Create;
  InitDriverProperties(TDBXProperties.Create);
  // '' makes this the default command factory.
  AddCommandFactory('', CreateServerCommandPatch);
end;

function TDSServerDriverPatch.CreateServerCommandPatch(DbxContext: TDBXContext;
    Connection: TDBXConnection; MorphicCommand: TDBXCommand): TDBXCommand;
var
  ServerConnection: TDSServerConnection;
begin
  ServerConnection := Connection as TDSServerConnection;
  Result := TDSServerCommandPatch.Create(DbxContext, ServerConnection, TDSServerHelp.GetServerConnectionHandler(ServerConnection));
end;

function TDSServerCommandPatch.Accessor: TDSServerCommandAccess;
begin
  Result := TDSServerCommandAccess(Self);
end;

procedure TDSServerCommandPatch.DerivedClose;
var
  Message: TDBXCommandCloseMessage;
begin
  Message := Accessor.FServerCon.CommandCloseMessage;
  Message.CommandHandle := FCommandHandle;
  Message.HandleMessage(Accessor.FConHandler);
end;

function TDSServerCommandPatch.DerivedExecuteQuery: TDBXReader;
var
  List: TDBXParameterList;
  Parameter: TDBXParameter;
  Reader: TDBXReader;
begin
  ExecutePatch;
  List := Parameters;
  if (List <> nil) and (List.Count > 0) then
  begin
    Parameter := List.Parameter[List.Count - 1];
    if Parameter.DataType = TDBXDataTypes.TableType then
    begin
      Reader := Parameter.Value.GetDBXReader;
      Parameter.Value.SetNull;
      Exit(Reader);
    end;
  end;
  Result := nil;
end;

procedure TDSServerCommandPatch.DerivedExecuteUpdate;
begin
  ExecutePatch;
end;

function TDSServerCommandPatch.DerivedGetNextReader: TDBXReader;
var
  Message: TDBXNextResultMessage;
begin
  Message := Accessor.FServerCon.NextResultMessage;
  Message.CommandHandle := FCommandHandle;
  Message.HandleMessage(Accessor.FConHandler);
  Result := Message.NextResult;
end;

procedure TDSServerCommandPatch.DerivedPrepare;
begin
  inherited;
  FCommandHandle := Accessor.FServerCon.PrepareMessage.CommandHandle;
end;

procedure TDSServerCommandPatch.ExecutePatch;
var
  Count: Integer;
  Ordinal: Integer;
  Params: TDBXParameterList;
  CommandParams: TDBXParameterList;
  Message: TDBXExecuteMessage;
begin
  Message := Accessor.FServerCon.ExecuteMessage;
  if not IsPrepared then
    Prepare;
  for ordinal := 0 to Parameters.Count - 1 do
    Accessor.FServerParameterList.Parameter[Ordinal].Value.SetValue(Parameters.Parameter[Ordinal].Value);
  Message.Command := Text;
  Message.CommandType := CommandType;
  Message.CommandHandle := FCommandHandle;
  Message.Parameters := Parameters;
  Message.HandleMessage(Accessor.FConHandler);
  Params := Message.Parameters;
  CommandParams := Parameters;
  if Params <> nil then
  begin
    Count := Params.Count;
    if Count > 0 then
      for ordinal := 0 to Count - 1 do
      begin
        CommandParams.Parameter[Ordinal].Value.SetValue(Params.Parameter[Ordinal].Value);
        Params.Parameter[Ordinal].Value.SetNull;
      end;
  end;
  Accessor.FRowsAffected := Message.RowsAffected;
end;

function TDSServerConnectionPatch.CreateCommand: TDBXCommand;
var
  Command: TDSServerCommand;
begin
  Command := TDSServerCommandPatch.Create(FDbxContext, self, ServerConnectionHandler);
  Result := Command;
end;

var QC78698: TCodeRedirect;
initialization
  QC78698 := TCodeRedirect.Create(@TDSServerConnection.CreateCommand, @TDSServerConnectionPatch.CreateCommand);
finalization
  QC78698.Free;
end.

Reference:

  1. QC#78696: Memory leak in TDSServerConnection for in-process connection
  2. QC#78698: "Invalid command handle" when consuming more than one server method at runtime in an in-process application


Configure Windows 7 IIS7 for ISAPI DLL

IIS 7 on Windows 7 requires some configuration to get ISAPI DLLs working.  It is not as straightforward as it was with IIS 5.

Install IIS 7

  1. Go to Control Panel | Programs and Features | Turn Windows features on or off (requires elevation).
  2. Check "Internet Information Services" and make sure "ISAPI Extensions" and "ISAPI Filters" are checked as well.
  3. Click the OK button to start the installation.

After installing IIS 7, open your favorite web browser and enter the URL http://localhost/ to make sure IIS is up and running.  You might need to check your firewall settings and add an exception for port 80 TCP traffic if necessary.

Configure for ISAPI DLL

Add Virtual Directory

First, add a virtual directory to host your ISAPI DLL:

  1. Open Internet Information Services Manager (requires elevation).
  2. Right-click the "Default Web Site" node and click "Add Virtual Directory" in the popup menu:

Enter “Alias” and “Physical Path” of the virtual directory:

Enable ISAPI for Virtual Directory

To enable ISAPI for the virtual directory:

  1. Select the virtual directory node (e.g. "ISAPI" in this example).
  2. Double-click the "Handler Mappings" icon.
  3. Click "Edit Feature Permissions…" in the Actions panel.
  4. An "Edit Feature Permissions" dialog appears.
  5. Check "Execute".
  6. Click the OK button to commit the changes.

Enable Directory Browsing for Virtual Directory

This step is optional but convenient.  To enable Directory Browsing for a virtual directory:

  1. Select the virtual directory node (e.g. "ISAPI" in this example).
  2. Double-click the "Directory Browsing" icon.
  3. Click "Enable" in the Actions panel.

Edit Anonymous Authentication Credentials

  1. Select the virtual directory node.
  2. Double-click the "Authentication" icon.
  3. Click to select the "Anonymous Authentication" item.
  4. Click "Edit…" in the Actions panel.
  5. A dialog appears.
  6. Check "Application pool identity" and click the OK button to commit the changes.

Enable ISAPI modules

  1. Click the root node.
  2. Double-click the "ISAPI and CGI Restrictions" icon.
  3. Click "Edit Feature Settings…" in the Actions panel.
  4. Check the "Allow unspecified ISAPI modules" option.  This option allows any ISAPI DLL to be executed under IIS.  If you don't use this option, you will need to specify a list of allowed ISAPI DLLs explicitly.

Edit Permission for Virtual Directory

  1. Select the virtual directory node (e.g. "ISAPI" in this example).
  2. Right-click the node and click "Edit Permissions" in the popup menu.
  3. A Properties dialog appears.
  4. Switch to the "Security" page.
  5. Click the Edit button to show the Permissions dialog.
  6. Add "IIS_IUSRS" to the permission list.

Enable 32 bits ISAPI DLL on IIS 7 x64

This is only required if you are using IIS 7 x64 and would like to run a 32-bit ISAPI DLL on it.  If your ISAPI DLL and IIS 7 are both x86 or both x64, you may skip this step.

  1. Click the "Application Pools" node.
  2. Click the "DefaultAppPool" item.
  3. Click "Advanced Settings…" in the Actions panel.
  4. An "Advanced Settings" dialog appears.
  5. Set "Enable 32-Bit Applications" to True.
  6. Click the OK button to commit the changes.

If you don't enable this option for 32-bit applications, you may encounter the following error when executing the ISAPI DLL from a web browser:

HTTP Error 500.0 – Internal Server Error

The page cannot be displayed because an internal server error has occurred.

HTTP Error 500.0 – Internal Server Error
Module    IsapiModule
Notification    ExecuteRequestHandler
Handler    ISAPI-dll
Error Code    0x800700c1
Requested URL    http://localhost:80/isapi/isapi.dll
Physical Path    C:\isapi\isapi.dll
Logon Method    Anonymous
Logon User    Anonymous 

You may now deploy your ISAPI DLLs into the virtual directory and execute them from a web browser.
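For reference, a bare Web Broker ISAPI project that could be dropped into such a virtual directory looks roughly like this sketch (the project and unit names are hypothetical; the three exports are the standard ISAPI extension entry points):

```delphi
library MyISAPI;

uses
  WebBroker,
  ISAPIApp,
  MyWebModuleUnit in 'MyWebModuleUnit.pas' {WebModule1: TWebModule};

{$R *.res}

exports
  // Standard ISAPI extension entry points expected by IIS
  GetExtensionVersion,
  HttpExtensionProc,
  TerminateExtension;

begin
  Application.Initialize;
  Application.CreateForm(TWebModule1, WebModule1);
  Application.Run;
end.
```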

DataSnap and ISAPI DLL

You may create a Delphi DataSnap ISAPI DLL and deploy it on IIS.  From time to time, you may encounter a compilation error at development or deployment time after the ISAPI DLL has been consumed.  This is because an invoked ISAPI DLL is cached in the application pool, and you are not allowed to overwrite the DLL while it is cached.

To overcome this problem, you need to perform Recycle operation:

  1. Click “Application Pools” node.
  2. Right-click the "DefaultAppPool" item and click "Recycle…" in the popup menu.

Deploying as an ISAPI DLL is encouraged at the deployment stage, as IIS caches the ISAPI DLL for performance.

However, caching may not be practical at the development stage, since a recycle is needed every time the ISAPI DLL is overwritten by frequent recompiling.  You may consider compiling the server module as a CGI application during development.  Each invocation of a CGI is a separate OS process and won't be cached by the IIS application pool.
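Switching a Web Broker module between ISAPI and CGI targets only changes the project file; a minimal CGI project is sketched below (again, the project and unit names are hypothetical), with the same web module reused unchanged:

```delphi
program MyCGI;

{$APPTYPE CONSOLE}

uses
  WebBroker,
  CGIApp,
  MyWebModuleUnit in 'MyWebModuleUnit.pas' {WebModule1: TWebModule};

{$R *.res}

begin
  // Each request launches a fresh process, so nothing is cached by IIS
  Application.Initialize;
  Application.CreateForm(TWebModule1, WebModule1);
  Application.Run;
end.
```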

Install CGI on IIS

  1. Go to Control Panel | Programs and Features | Turn Windows features on or off (requires elevation).
  2. Check "Internet Information Services" and make sure "CGI" is checked.
  3. Click the OK button to start the installation.

Enable CGI Module

  1. Click the root node.
  2. Double-click the "ISAPI and CGI Restrictions" icon.
  3. Click "Edit Feature Settings…" in the Actions panel.
  4. Check the "Allow unspecified CGI modules" option.

Consume DataSnap Server Methods via URL

DataSnap server methods use JSON as the data stream over a REST-style protocol.  For example, consider a simple EchoString server method defined as:

type
  {$MethodInfo On}
  TMyServerMethod = class(TPersistent)
  public
    function EchoString(Value: string): string;
  end;
  {$MethodInfo Off}

implementation

function TMyServerMethod.EchoString(Value: string): string;
begin
  Result := Value;
end;

To access this method compiled into an ISAPI DLL, the URL is something like

http://localhost/datasnap/MyISAPI.DLL/datasnap/rest/TMyServerMethod/EchoString/Hello

and the response text will be:

{“result”:[“Hello”]}

Likewise, a CGI URL is

http://localhost/datasnap/MyCGI.exe/datasnap/rest/TMyServerMethod/EchoString/Hello
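The same endpoint can also be consumed from Delphi code; here is a sketch using Indy's TIdHTTP (the URL mirrors the one above, and error handling is omitted):

```delphi
uses IdHTTP;

procedure CallEchoString;
var
  Http: TIdHTTP;
  Json: string;
begin
  Http := TIdHTTP.Create(nil);
  try
    // The method parameter travels as the last URL segment
    Json := Http.Get('http://localhost/datasnap/MyISAPI.DLL/datasnap/rest/TMyServerMethod/EchoString/Hello');
    // Json should now contain the response text: {"result":["Hello"]}
  finally
    Http.Free;
  end;
end;
```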


Reference:

  1. DataSnap 2010 HTTP support with an ISAPI dll; Author: Tierney, Jim
