One of the most challenging things (for me, at least) is dealing with JavaScript. That's why I love the AjaxToolKit from MS. Using these tools means I don't have to write a lot of JavaScript myself. The downside to this is that when the time does come to "get my hands dirty", I'm usually floundering around for a while until I hit on a nugget or two that I can use.
Such is the case with the subject of today's notes, so let's get to it, shall we?
The Scenario: a Content page in an ASP.NET 3.5 Web Application that relies HEAVILY on the AjaxToolKit for much of its functionality. (The fact that this is a Content Page is important here.) This page has several UpdatePanels, most of which have triggers of some sort. The one that we will focus our attention on is nested inside the fourth TabPanel of a TabControl, and is triggered by an external Timer. This Timer updates the text inside our subject TextBox. I should also mention here that the TabControl itself is ALSO inside an UpdatePanel, so we have something like this (the nested UpdatePanel and its TextBox are the pieces we will concern ourselves with):
<asp:Panel runat="server">
    <asp:UpdatePanel runat="server">
        <ContentTemplate>
            <ajaxToolkit:TabContainer runat="server">
                <ajaxToolkit:TabPanel ID="TabPanel1" runat="server" />
                <ajaxToolkit:TabPanel ID="TabPanel2" runat="server" />
                <ajaxToolkit:TabPanel ID="TabPanel3" runat="server">
                    <ContentTemplate>
                        <asp:Button ID="Button1" runat="server" Text="Begin Long Process" OnClick="Button1_Click" />
                    </ContentTemplate>
                </ajaxToolkit:TabPanel>
                <ajaxToolkit:TabPanel ID="TabPanel4" runat="server">
                    <ContentTemplate>
                        <asp:Panel runat="server">
                            <asp:UpdatePanel runat="server">
                                <ContentTemplate>
                                    <asp:TextBox ID="TextBox1" runat="server" TextMode="MultiLine" Width="550px" Height="455px" />
                                </ContentTemplate>
                                <Triggers>
                                    <asp:AsyncPostBackTrigger ControlID="Timer1" EventName="Tick" />
                                </Triggers>
                            </asp:UpdatePanel>
                        </asp:Panel>
                    </ContentTemplate>
                </ajaxToolkit:TabPanel>
            </ajaxToolkit:TabContainer>
        </ContentTemplate>
    </asp:UpdatePanel>
</asp:Panel>
<asp:Timer ID="Timer1" runat="server" Interval="3000" OnTick="Timer1_Tick" Enabled="False" />
Dizzying to say the least, and I wrote it! OK, let's move on.
The task is to initiate a VERY long-running process (NO, WE ARE NOT THREADING AN ASP.NET APPLICATION!) using the Button1_Click event, and then update the status using TextBox1 on a different tab, with the Timer1_Tick event. The timer interval is set at 3 seconds, because we have no idea what the external system is going to do, how it will respond, or how long it will take.
As a side note here, the button that kicks this off is on another tab because there are some conditions on that tab that play into the external call. For this discussion, those conditions are not important.
Now that we have the UI set up, let's look at the server-side code. Remember, we are using the AjaxToolKit, and its ability to handle most of the JavaScript required to take care of switching tabs and partial page refreshes, while still using the postback of the page to trigger the timer event (jQuery could do the same thing, but hooking to the endpoint of the external system was a bit more cumbersome in this scenario). Because this is a web application, the external system is available server-side by adding a reference, and then hooking to the endpoint when we need to.
The Button Click event:
protected void Button1_Click(object sender, EventArgs e)
{
    TextBox1.Text = string.Empty;
    Timer1.Enabled = true; // this starts the timer
    // change the ActiveTab property of the TabContainer to the tab that has the TextBox
    try
    {
        // set the Text property of TextBox1 based on the conditions that have been previously met
    }
    catch (Exception ex)
    {
        TextBox1.Text = ex.Message + ex.StackTrace;
        Timer1.Enabled = false;
    }
}
The Timer Tick event:
protected void Timer1_Tick(object sender, EventArgs e)
{
    string result = <result of the call to the external system>;
    bool done = <condition for process to exit>;
    if (!done)
    {
        // put something in the textbox
    }
    else
    {
        // put something in the textbox
        // alert the user that the process is complete
    }
}
So far so good. We've got UI and server-side code, but that darn text box keeps going back to the top, no matter what we put in it. So we have to do something on the client side to show some sort of progress. We are going to do this inside the ContentPlaceHolder that is handling the content for our page, to make sure that the ToolKitScriptManager on the MasterPage is already loaded. Otherwise, we get the dreaded 'Sys.Application is not defined' error, and we have a hard time (at least I did) figuring out why.
With the AjaxToolkit, there are all kinds of things that are registered with the page behind the scenes. One of them is the Sys.Application object. We are going to use this object to tell the page to load a JavaScript function every time this page is initialized. Because we are using a timer that is set to every three seconds, that is how often this action will happen, until the timer is turned off. So here we go, and I'll explain a bit as we go along:
<script type="text/javascript">
    // first we tell the application that there is a JavaScript function that we need to add
    Sys.Application.add_init(JavaScriptFunctionToRun);

    function JavaScriptFunctionToRun(sender) {
        // next we tell the page that we need to do something after the HTTP request is completed
        Sys.WebForms.PageRequestManager.getInstance().add_endRequest(EndRequest);
    }

    function EndRequest(sender, args) {
        // here we are telling the page WHAT to do at the end of the request
        SetScrollPosition();
    }

    // and finally(!) the instructions to move the scrollbar to the bottom of the textbox
    function SetScrollPosition() {
        var textArea = document.getElementById('<%=TextBox1.ClientID%>');
        textArea.scrollTop = textArea.scrollHeight;
    }
</script>
If you've made it to this point in the post, I hope it was worth the effort. If your process is especially long, you can show the user that the page really is doing something by using the Session, a label and the TimeSpan object in the server-side code.
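As a quick, hypothetical sketch of that idea (lblElapsed is an assumed Label on the status tab, and "ProcessStart" is an assumed Session key; neither appears in the markup above):

```csharp
// Hypothetical names: lblElapsed and Session["ProcessStart"] are mine,
// not part of the page shown earlier.

// In Button1_Click, remember when the long process was kicked off:
Session["ProcessStart"] = DateTime.Now;

// Then, in Timer1_Tick, report how long it has been running:
TimeSpan elapsed = DateTime.Now - (DateTime)Session["ProcessStart"];
lblElapsed.Text = string.Format("Running for {0}:{1:00}",
    (int)elapsed.TotalMinutes, elapsed.Seconds);
```

If the Label lives inside the same UpdatePanel that the Timer triggers, it will refresh on every tick, right along with the TextBox.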
TaTa for now, and Happy Coding!
Sunday, September 18, 2011
Tuesday, July 12, 2011
ClickOnce ... Through the Looking Glass
By now, I'm sure my developer buddies that are versed in the Microsoft® paradigm have at least "heard" of ClickOnce. That wonderful feature that was released with VS 2005, and has made many a small project ridiculously easy to toss onto a disc, push out to the company network, and even make available to the general public. Configurable, and fully operational, out of the box.
But what happens when your project domain requires that development happen behind the firewall, and installation of the completed project must escape the friendly confines of the company network, and venture into the hands of an external source control system that lives beyond the firewall? What is a dev to do? The beauty of the technology is the ability to continually ensure that the end user has the latest bits. But when the target machine is abstracted by this additional layer of complexity, things get a bit dicey, and a lot more challenging.
The scenario: A project is being developed on an internal company network, with the source code under a company-approved source code repository. Pretty standard stuff. Here is where the wrinkle comes in. Upon deployment, the installation bits must reside in an EXTERNAL repository, which is also version controlled. Sounds crazy? It's not. It really does happen. So then, how can we take advantage of the simplicity of a ClickOnce deployment in a complex environment like this?
First, let's set up the environment. We'll need:
- Source control for the code base.
- Separate, external version control for the published artifacts
- IIS running on both the local dev box, and the installation machine.
The key lies in the design of the ClickOnce feature set. This type of deployment is designed to function over the network. Keep that in mind, as we move along. Something else to keep in mind is that the local machine is part of the network as well (as a developer, I forget that part sometimes). And, as we are talking about developing in a Microsoft-based environment, let's also consider the web server (IIS) on the local dev box ("127.0.0.1" to be specific) as part of the network too.
So here's what we do with this setup:
- Configure the local IIS with a virtual directory that points to a physical location on the local machine. This location MUST be the same physical location that the application will be installed from on the client.
- In the Publish tab of the project Properties, there is a text box to define where the application will be deployed to. In that box, define the publish location as "http://127.0.0.1/<project deployment location>/". In doing this, ensure that IIS on the development machine has a virtual directory of the name that you want, and that the virtual directory points to the actual physical location that the application will look for when it is installed. This is important, because when the application is actually installed on the machine outside the firewall, it will look here to find the files it requires. And if set up properly, the developer can publish to this location on the dev box.
From here, the developer can write whatever code is needed, publish it to the local IIS, and then take that and upload the changes to the versioning repository (no, we are not going to get into the fact that ClickOnce handles its own versioning as part of the publish process).
I'm sure that, with the ability of ClickOnce technology to be used just about anywhere, this use case may seem unnecessary. From my perspective, you would be right, and those discussions were many, and often. In the end, this is what the client wanted, it works, and that's all that really matters.
Wednesday, June 29, 2011
On the Outside Looking In
Scenario: You are working in a system where an external process must not only communicate with your subsystem, but must do so across an intranet, using web services as the communication mechanism. And to throw a little more into the mix, the entire process is automated.
What to do? The client processes must operate in the context of a user that is local to the client and has access to system-level operations on the client machine. By default, the ASPNET user does not have these permissions.
If you are working with IIS version 6 or later, you would say the solution is to use impersonation, and you would be correct. But in this time, when enterprises like to hang on to their systems and endure the slow, torturous death of an operating system, we are working with IIS 5, where impersonation is not so easy to accomplish.
There are a couple of solutions to this conundrum, neither of which presented themselves at first glance (or at least they didn't to me):
- Use the Message Queue to communicate to a central processing module, placing messages on the queue with the web service, and removing them with the command module.
- Allow both User spaces to remain separate, by using the FileSystemWatcher object of the .Net framework.
After watching the remote invocation of the web service fail to gain access to some processes that needed to be stopped as part of the required functionality, it became clear that a permissions issue was the source of not only my troubles, but some that had come before me.
The previous solution used the Message Queue to communicate with the system. While it is a solution that deals with the issue at hand (communication in a limited-permissions environment), the process was rather cumbersome:
- Create and serialize a command object with several properties for transmission across the wire.
- Receive the object on the remote machine, and place it on the message queue.
- With the command module on the remote machine, remove the object from the queue, and de-serialize the command object to determine what process to run.
- Execute the process on the remote machine with the command module.
The code that accomplishes all of this is rather convoluted, so I'm not going to even attempt to post it. It took me two months to begin to understand what was happening when, and there are still some things I'm not clear on.
However, the solution that I chose is radically simpler, and still accomplishes the same thing. The command module running on the remote machine is still there, although there is a bit more functionality, and a lot less complexity.
The new process goes like this:
- From a web service on the server, make a call to a web service on the client with the name of the command that you want to run.
- On the remote machine: Build a web service that will receive the command (a string, mind you), and then run the necessary function to communicate with the command module by writing a file in a very specific place.
- Use the command module to perform the required functions, based on what the dropped file says to do.
The scenario itself is not a common one. Normally, web services interact directly with a database of some sort and then throw data back. Here we are talking about interaction with a remote host, without trying to interfere with anything else.
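For completeness, the server side of step one can be as small as a single proxy call. A rough sketch, with hypothetical names (RemoteCommandService and SendCommand stand in for whatever "Add Web Reference" generates for your client service; substitute your own):

```csharp
// Hypothetical sketch: RemoteCommandService is the proxy class generated by
// "Add Web Reference", and SendCommand is the web method exposed on the
// client machine. Both names are placeholders, not the project's real ones.
using (RemoteCommandService client = new RemoteCommandService())
{
    client.Url = "http://<client machine>/<Application Path>/CommandService.asmx";
    client.SendCommand("ProcessToStop"); // the command string the client drops into the file
}
```

The call is effectively fire-and-forget: the server doesn't wait for the process to stop, it just hands the command string across the wire and lets the client-side machinery below take over.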
With the web services on the remote machine, it was pretty simple (code is in C#):
// 'a' is the command object received by the web method; a.Name is the process to act on
string dropDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, @"Reclaim\");
string dropFile = Path.Combine(dropDir, "KillProcess.txt");
if (!Directory.Exists(dropDir))
    Directory.CreateDirectory(dropDir);

// append the new process name to whatever is already queued in the drop file
StringBuilder bldr = new StringBuilder();
if (File.Exists(dropFile))
    bldr.AppendLine(File.ReadAllText(dropFile));
bldr.AppendLine(a.Name);
File.WriteAllText(dropFile, bldr.ToString());
And in the command application that is running under the required permission set, you have this in one method of the application:
// watch the same drop directory the web service writes to
string dropDir = @"C:\inetpub\wwwroot\<Application Path>\Reclaim\";
if (!Directory.Exists(dropDir))
    Directory.CreateDirectory(dropDir);

FileSystemWatcher reclaimer = new FileSystemWatcher(dropDir);
reclaimer.NotifyFilter = NotifyFilters.LastWrite;
reclaimer.Changed += new FileSystemEventHandler(reclaimer_Changed);
reclaimer.EnableRaisingEvents = true; // start watching once the handler is wired up
And in the event handler for the FileSystemWatcher:
// dropFile points at the same KillProcess.txt the web service writes to;
// MethodCall and Ano are placeholders for app-specific methods (names anonymized)
if (File.Exists(dropFile))
{
    MethodCall();
    string[] procNames = File.ReadAllLines(dropFile);
    if (procNames.Length >= 3)
    {
        foreach (string str in procNames)
        {
            App a = AppList.Where(ap => ap.Name == str).FirstOrDefault();
            if (a != null)
            {
                MethodCall(a);                       // act on the named process
                System.Threading.Thread.Sleep(3000); // give the process time to stop
                Ano(a);
            }
        }
        File.Delete(@"C:\Inetpub\wwwroot\<Application Path>\Reclaim\KillProcess.txt");
    }
}
And that's it! A simple call to a web service on the remote machine, and an application running on the remote machine, waiting for changes in the file to act on.
Sunday, February 27, 2011
Boise Code Camp ver. 2011
Spent the day at Boise Code Camp 2011, or Geekfest 2011, as my wife calls it. And it was good for her, because she's not been feeling well of late, and she got to rest without me fawning over her and stressing her out even more.
But on to the upside, I learned a lot today. The problem is going to be retaining it all, and then finding ways to implement any of it. Here's some of what I learned:
- jQuery has the power to completely rewrite your entire web UI (including the CSS) under the right conditions. Please don't ask me what they are, because I haven't even BEGUN to play with it yet. I'm still trying to wrap my brain around CSS and JavaScript in the first place.
- WPF and Silverlight. Need I say more?
- Mobile computing may become less business-friendly in the future. Why? See my explanation below.
- In this country there really ARE kids that are moving into engineering fields. Just ask David Starr of ElegantCode.com. One of them is his son.
OK, this mobile computing thing. With the advances in Android, iOS, and Windows Phone 7, along with the HUGE leap in the available memory on today's phones in general, and smartphones in particular, one would think that it would be easier to have data live on the compact device, as opposed to having to access business data from "The Cloud". Not so fast. I had a small discussion with the Microsoft Developer Evangelist for the Northwest area (Mithun is a really nice guy) today, and he gave me the short version of a few things that I had not taken the time to think about.
First is the memory footprint that would be involved. For SqlCE, the size limit on the database is 4GB, and then at some point it has to be hooked up to a full install of SQL Server anyway, in order to do anything with all that data. The second thing that we talked about was what I felt was a missing piece in VS 2010. I don't know how many have yet to realize this, but Visual Studio 2010 DOES NOT CURRENTLY SUPPORT the Compact Framework (this is really strange, because the WinPhone 7 kernel runs on top of it, and you can only do WinPhone 7 development in VS 2010). Mithun couldn't give me an answer on that one, except to say that he is as hopeful as we are that this issue gets resolved. He couldn't give me a timetable, or even a solid commitment on whether it's even planned.
Bed Time now.
Good-night