Javascript @ DevSummit

It’s that time again! Looking over the javascript sessions at this year’s Esri DevSummit, I’m really excited to see so many more sessions that cover integrating jsapi or esri-leaflet with other front-end frameworks.

Also – the online planner / agenda app is much improved and supports deep-linking, so you can just click the links below for more details and add sessions to your schedule.

My Sessions

These are the sessions I’m presenting / co-presenting. Not listed is the “Modern Web Development” workshop, which sold out really quickly. BUT if you would be interested in having this workshop available at the UC, let me know. (dbouwman AT esri DOT com).

| Session | When | Where |
| --- | --- | --- |
| Choosing the Best JavaScript Framework for You | Tuesday, 10 Mar 2015, 1:00pm – 2:00pm | Primrose C/D |
| JavaScript Unit Testing | Tuesday, 10 Mar 2015, 4:00pm – 5:00pm | Primrose C/D |
| Declarative Mapping Applications with AngularJS | Wednesday, 11 Mar 2015, 5:30pm – 6:30pm | Demo Theater 2 – Oasis 1 |
| Choosing the Best JavaScript Framework for You | Friday, 13 Mar 2015, 1:00pm – 2:00pm | Primrose B |

Thinking outside the Dojos

This is a list of other sessions that will cover using maps in non-dojo javascript apps, or how to integrate the Dojo-based API with other frameworks. If you are building line-of-business applications that need mapping, or just want to use Angular, Ember, React etc, these should be great sessions.

| Session | When | Where |
| --- | --- | --- |
| Introduction to AngularJS Workshop | Sunday, 8 Mar 2015, 8:00am – 5:00pm | Hilton Palm Springs |
| Introduction to AngularJS Workshop | Monday, 9 Mar 2015, 8:00am – 5:00pm | Hilton Palm Springs |
| Esri Leaflet: An Introduction | Wednesday, 11 Mar 2015, 1:00pm – 2:00pm | Pasadena/Sierra/Ventura |
| Esri Leaflet: Advanced Topics | Wednesday, 11 Mar 2015, 2:30pm – 3:30pm | Pasadena/Sierra/Ventura |
| Extend Esri Leaflet with Leaflet Plug-ins | Wednesday, 11 Mar 2015, 4:00pm – 5:00pm | Pasadena/Sierra/Ventura |
| Think Fast: A Quick Look into React and Webpack | Wednesday, 11 Mar 2015, 4:30pm – 5:00pm | Mesquite B |
| Front-End Superheros | Wednesday, 11 Mar 2015, 2:30pm – 3:00pm | Demo Theater 2 – Oasis 1 |
| JavaScript: the Weird Parts | Wednesday, 11 Mar 2015, 4:30pm – 5:00pm | Demo Theater 2 – Oasis 1 |
| Building Map Apps with Knockout and the Esri JavaScript API | Thursday, 12 Mar 2015, 1:00pm – 1:30pm | Mesquite B |
| Make JavaScript Lean, Mean, and Clean | Thursday, 12 Mar 2015, 1:30pm – 2:00pm | Mesquite B |
| Koop: Using 3rd party services within the ArcGIS Platform | Thursday, 12 Mar 2015, 1:00pm – 2:00pm | Pasadena/Sierra/Ventura |
| ES 6, Web Components and the Future of JavaScript | Thursday, 12 Mar 2015, 4:00pm – 5:00pm | Pasadena/Sierra/Ventura |
| Return of Killer Apps: Buggier, Bolder, Bitter | Thursday, 12 Mar 2015, 5:30pm – 6:30pm | Pasadena/Sierra/Ventura |

Dojo Related Sessions

| Session | When | Where |
| --- | --- | --- |
| ArcGIS GeoEvent Server: Real Time Web Applications | Tuesday, 10 Mar 2015, 2:30pm – 3:30pm | Mesquite B |
| Build System Automation | Tuesday, 10 Mar 2015, 5:30pm – 6:00pm | Demo Theater 2 – Oasis 1 |
| ArcGIS API for JavaScript: What Have you Done for Me Lately | Tuesday, 10 Mar 2015, 2:30pm – 3:30pm | Primrose B |
| Building Mobile Web Apps | Tuesday, 10 Mar 2015, 5:30pm – 6:30pm | Primrose B |
| Modular JavaScript | Wednesday, 11 Mar 2015, 1:00pm – 2:00pm | Demo Theater 2 – Oasis 1 |
| Dojo: The Better Parts | Thursday, 12 Mar 2015, 9:00am – 10:00am | Smoketree A – E |
| ArcGIS API for JavaScript: Data Visualization | Thursday, 12 Mar 2015, 9:00am – 10:00am | Pasadena/Sierra/Ventura |

Javascript Developers at Esri UC: Let’s Talk!

The 2014 Esri User Conference is just around the corner, and I’m all excited to talk to other developers building applications with the Esri platform.

While I will be spending much of my time at the ArcGIS Online Island (insert “voted off” joke here), I would love to meet up with web / javascript developers who are actively working with the JS API or Esri Leaflet, using ArcGIS Online webmaps and story maps, or using or building web app templates and the Web App Builder.

Building Apps

The technologies used in web development today are changing faster than ever, so I’m especially interested in discussing your experiences building large web applications, integrating with other frameworks (Backbone, EmberJs, AngularJs, etc.), and getting your feedback on what you would like to see in the future – i.e. how you see maps fitting into “web components”, javascript build systems, dependency management (requirejs vs commonjs + browserify), and package management (npm vs bower).

If this sounds interesting, just roll over to the ArcGIS Online Island and find me – I’ll be there most of the day Tuesday, Wednesday and Thursday morning.

Alternatively, if you want to have a sit-down – I’ve set up some meeting time-slots via – I’ve never used this site before, so it will be a bit of an experiment :)

Javascript Talks @ UC

Derek Swingley whipped up a quickie “bare-bones” UC Agenda app that conveniently supports deep-linking to searches – so here is a list of all the javascript sessions (Yeah Technology!).

On Tuesday from 3:15 to 5:00 I’ll be doing “Speed Geeking”, where I’ll show how to use Yeoman to create and deploy a well-architected web app (using Bootstrap, Backbone, Marionette and Esri-Leaflet) in ~2 minutes.

I’m also giving a 30-minute demo theater talk called “Javascript Sanity”, where I will discuss how we built ArcGIS for Open Data – specifically why we chose BackboneJS for the front-end and AngularJS for the admin interface. I will also talk about our “developer workflow”. This is at the Esri Labs Demo Theater in Exhibit Hall B at 1:30 on Wednesday. Immediately following this session is Andrew Turner talking about the server-side of ArcGIS for Open Data – if you are into Ruby on Rails, be sure to hit this one.

Patrick Arlt from the Portland Research Center will be talking about esri-leaflet in the Esri Labs Demo Theater on Tuesday at 4:30pm, and then about AngularJS in the Developer Island Demo Theater at 2:30 on Wednesday.

And for those interested in Open Data, I’m apparently giving/involved in one or more of the “ArcGIS Online: ArcGIS for Open Data – An Introduction” sessions – all at 8:30am… apparently civic-minded people are also early risers :)

Safe travels and see you in San Diego!

DevSummit and DevMeetup Videos

Just a quick note that the video of the Javascript Unit Testing talk from the Esri 2014 Developer Summit is now up. We gave the talk twice, and while there are two recordings, the first one had some video and audio issues – so the one below is the one to watch (literally!).

Direct link to video

In this talk I start things off with the “zen of testing”, then David Spriggs talks about using the Intern, followed by Tom Wayson discussing Karma, and I close the talk with Grunt + Jasmine + automation.

Here is a PDF of the slide deck as well – may be useful in discussions with others.

All the tools we discuss in the session are listed in Tom’s github account.

DevMeetup Video

I also gave a quick presentation at the Fort Collins Dev Meetup, and recorded a screencast. In it, I talk at a high level about the soon-to-be-released-in-beta ArcGIS Open Data project, and then demonstrate the front-end developer workflow related to automated linting and unit testing. I then show some of our integration tests running (using selenium, driven by mocha + wd.js). Finally, I talk about some work I’m doing taking “best practices” from the Open Data project and creating yeoman scaffolders to help people start new projects with all the infrastructure in place. For this demo, I scaffolded and published a (really simple) web app to github pages in ~2 minutes. As this moves forward I’ll be posting about the scaffolders themselves as well as how to create scaffolders.

Direct link to video

Related Posts

In the video I demonstrate a number of the tools and concepts that I’ve written about in these posts…

Chasing Numbers: Pragmatic Javascript Code Coverage

When writing unit tests, we want to spend our time wisely and focus our efforts on the areas of the application where test coverage provides the most benefit – typically business logic or other complex “orchestration” code. We can gain insight into this by using “code coverage” tools, which report back information about the lines of code that are executed during your tests.

Coverage is typically reported as percentages – the percentage of statements, branches, functions, and lines covered. Which is great… except what do these numbers really mean? What numbers should we shoot for? Does 100% statement coverage mean that your code is unbreakable, or that you spent a lot of time writing “low-value” tests? Let’s take a deeper look…

Here is the output from our automated test system (grunt + jshint + jasmine) on our project – included is a “coverage summary”…

These numbers have been holding steady throughout the development cycle… but what do they mean? Let’s break them down a little.

Coverage Measures

The first one is “statements”. In terms of code coverage, a “statement” is the executable code between control-flow statements. On its own, it’s not a great metric to focus on because it ignores the control-flow statements themselves (if/else, do/while, etc.). For unit tests to be valuable, we want to execute as many code “paths” as possible, and the statements measure ignores this. But it’s “free” and thrown up in our faces, so it’s good to know what it measures.

Branches refers to the aforementioned code paths, and is a more useful metric. By instrumenting the code under test, the tools measure how many of the various logic “branches” have been executed during the test run.

Functions is pretty obvious – how many of the functions in the codebase have been executed during the test run.

Line is by far the easiest to understand, but similar to Statements, high line coverage does not equate to “good coverage”.
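To make the statements-vs-branches distinction concrete, here is a tiny (purely hypothetical) function – a single test with `isMember === true` executes every statement, yet covers only half the branches.

```javascript
// Hypothetical function to illustrate statement vs branch coverage.
function applyDiscount(price, isMember) {
  var discounted = price;     // executed by every test
  if (isMember) {             // branch point: two paths
    discounted = price * 0.5; // executed only when isMember is true
  }
  return discounted;
}

// This single call executes 100% of the statements above (the if-body runs),
// but branch coverage is only 50% -- the false path is never taken.
var memberPrice = applyDiscount(100, true);

// A second test is needed to cover the other branch:
var regularPrice = applyDiscount(100, false);
```

This is exactly why we keep a closer eye on branch coverage than on statement or line coverage.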

Summary Metrics

With that out of the way, let’s look at the numbers…

Since statements is not a very good indicator, let’s skip that. We notice it, but it’s not something we strive to push upwards.

On Branches we are at ~42%, which is lower than I’d like, but we’ll get into this in a moment.

Functions are ~45%, but this is a little skewed because in many controllers and models we add extra functions that abstract calls into the core event bus. We could inline these calls, but that makes it much more complex to create spies and mocks. In contrast, putting them into separate functions greatly simplifies the use of spies and mocks, which makes it much easier to write and maintain the tests. So, although creating these “extra” methods adversely impacts this metric, it’s a trade off we are happy with.

Where does this leave us? These numbers don’t look that great do they? Yet I’m blogging about it… there must be more.

Detailed Metrics

These summary numbers tell very little of the story. They are helpful in that they let us know at a glance if the coverage is heading in the right direction, but as “pragmatic programmers” our goal is to build great software, not maximize a particular set of metrics. So, we dig into the detailed reports to check where we have acceptable coverage.

Detailed Code Coverage

The report is organized around folders in the code tree, and summarizes the metrics at that level. I’ve sorted the report by “Branches”, and while we can see a fair bit of “red” (0-50% coverage) in that table, the important thing is that we know what has low coverage – as long as we are informed about the coverage, and we decide the numbers are acceptable, the coverage report has done its job.

File Level Coverage

Diving down to the file level, we can check if we have high levels of coverage on the parts of the code that really matter. For us, the details controller and view are pretty critical, and we can see that they have high coverage. It should be noted that high coverage numbers don’t always tell the whole tale. For critical parts of our code base, we have many suites of tests that throw all manner of data at the code. We have data fixtures for simulating 404s from APIs, mangled responses, as well as many “flavors” of good data. By throwing all this different data at our objects we have ‘driven’ much of the code that handles corner cases.

Here is a look at the Models in our application.

Model Coverage

We can easily tell that our models have very good coverage – and this recently helped us a ton when we refactored our back-end API json structure. Since the models contain the logic to fetch and parse the json from the server, upgrading the application to use this new API was relatively simple: create new json fixtures from the new API, run the tests against the new fixtures, watch as most of the tests fail, update the parser code until the tests pass and shazam, the vast majority of the app “just worked”. Without these unit tests, it would have taken much longer to make this change.
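That workflow can be sketched in miniature – the fixture shapes and parse function below are purely illustrative, not our actual API:

```javascript
// Illustrative fixtures only -- shapes invented for this sketch.
var oldApiFixture = { rows: [{ id: 1, title: "Parks" }] };
var newApiFixture = { data: [{ id: 1, attributes: { title: "Parks" } }] };

// Model parse logic, tweaked until the fixture-driven tests pass against
// the new shape (while the old fixtures keep the old behavior honest).
function parseResponse(json) {
  var records = json.data || json.rows || [];
  return records.map(function (rec) {
    return {
      id: rec.id,
      title: rec.attributes ? rec.attributes.title : rec.title
    };
  });
}
```

Once `parseResponse` passes against both sets of fixtures, everything downstream of the models is untouched – which is the whole point.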

The system we use allows us to dive down even further – to check the actual line-by-line coverage.

Line coverage

Adding Code Coverage Reports

There are a number of different tools that can generate code coverage reports. On our project we are using istanbul, integrated with jasmine using a template mix-in for grunt-contrib-jasmine. Istanbul can also be used with the Intern and Karma test runners. If you are using Mocha, check out Blanket.js.
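For reference, the wiring looks roughly like this – a sketch using the grunt-template-jasmine-istanbul mix-in; option names can differ between plugin versions, so check the plugin’s docs before copying:

```javascript
// Gruntfile.js (sketch): jasmine task with Istanbul coverage via the
// grunt-template-jasmine-istanbul template mix-in.
module.exports = function (grunt) {
  grunt.initConfig({
    jasmine: {
      coverage: {
        src: ['src/**/*.js'],
        options: {
          specs: ['spec/**/*Spec.js'],
          template: require('grunt-template-jasmine-istanbul'),
          templateOptions: {
            coverage: 'reports/coverage.json',
            report: [
              { type: 'html', options: { dir: 'reports/html' } },
              { type: 'text-summary' } // console summary like the one above
            ]
          }
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-jasmine');
};
```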

If you are just getting into unit testing your javascript code, this is kinda the deep end of the pool – so I’d recommend checking out jasmine or mocha to get the general flow of js unit testing going, then automating things with a runner like karma, and then adding coverage reporting.

Hopefully this helps show the benefit of having code coverage as part of your testing infrastructure, and helps you ship great code!

Transitioning To Javascript

Over the past few weeks I’ve been getting quite a few people asking questions about transitioning to javascript – perhaps this recent post about Esri’s Roadmap for Web Developers has spurred more people into action – whatever the cause, I thought I’d share a few thoughts and links.

First off, now is a great time to get into javascript! jQuery has leveled the playing field across browsers, and the truly horrible versions of Internet Explorer are nearly behind us. The community is exploding, and it seems every day there is some exciting new project in javascript.

Javascript Application Architecture

I’ve got some great news: over the last few years javascript has matured as a language and as a community. No longer are javascript applications “spaghetti” code by default, and cross-browser issues are much less common and painful than in the past. Myriad Model-View-Something frameworks exist to provide structure for your code, and if you’re doing anything more complex than “Hello World” I’d strongly recommend investing in learning one (or more).

I was going to list out a bunch of frameworks along with pros and cons, but then I remembered this video by Rob Conery titled “Javascript Inferno”. I really like this talk as it compares four javascript frameworks – Knockout, Backbone, Angular, and Ember. Go ahead and click through and watch it now… I’ll wait here…

Conery javascript inferno

Additional Framework Thoughts…

BackboneJS + Marionette

Backbone was the first of the client-side MV* frameworks that really took off. It’s also barely a framework – very un-opinionated, thus allowing you to do virtually anything. Marionette is a Backbone extension that helps developers implement additional patterns by adding formal Modules, Controllers, Layouts, Regions, various types of Views, as well as an Application. Leveraging these greatly streamlines development, both by reducing repetitive code and by enforcing a development pattern. Personally, I liked this stack because it gives you lots of freedom while still providing pattern guidance. Coming from ASP.NET / C# on the backend, this resonated with me. Last spring I did a 6-part series on building a mapping app using Marionette, which would be a good intro if Backbone + Marionette sound appealing.

For what it’s worth, this is what we used to build the ArcGIS for Open Data application, as it gave us the benefit of solid structural and architectural patterns, while still leaving us lots of flexibility to implement the interface behavior we wanted.

Polymer (aka Web Components aka The Future)

Polymer is a Google project that lets you build applications based on Web Components. The tricky bit is that Web Components is an emerging W3C standard, and no browsers support it yet. Undaunted, Polymer provides a set of polyfills (stop-gap code) that let you build and use web components today (IE10+ and evergreen browsers). The project just hit “alpha” in mid-February 2014, so it’s great for experiments, but I’d recommend staying away from it for production. That said, Web Components will be the future of the web, so it is worth getting a general understanding of the concepts. Another side note – both Ember and Angular are aligning themselves to slip-stream their view infrastructure into web components.

General Stuff You Should Know / Use

Underscore / Lo-Dash

Underscore is a utility belt of awesome stuff that should be part of javascript but is not (yet). Lo-dash is the mo-better-faster implementation of the same library (yeah competition!). Get to know one or both of these, as they will save you a ton of time and effort. What is really great is that these libraries are smart enough to use native implementations of functions when they are available in the running browser – so the same “code” in your app falls back to a javascript implementation in down-level browsers, while newer browsers use the underlying native implementation.


Bootstrap

Bootstrap is a css framework that allows you to create a “reasonable” web app in minutes – no wonder it’s the most popular front-end framework! Sensible defaults on a responsive base mean that you can throw markup into a file and, after only a few minutes reading the documentation, have a site that looks good on a 27-inch iMac and on your phone.

What’s more – Bootstrap is so popular that when you want to level up, there are tutorials on creating themes, or you can skip that and drop two lattes worth of cash on a pre-made theme. Boom. Beautiful, and you don’t have to fight the css. Bootstrap also comes with a bunch of optional javascript helpers. Start by using them, and then as you transition up to using a framework like Angular/Ember/Backbone, you can switch over.

Getting Started…

In the famous words of Nike: Just Do It. Start something – anything. Throw it up on github. A great starting point for working with maps is the bootstrap-map-js repo. Slap up something simple. Then build something else.

Will it break? Yes. Will you have problems with Internet Explorer? Yes. Will you scream at your monitor and rue the day you learned how to spell javascript? Likely.

But honestly, that’s learning. You did that when learning Silverlight or Flex. And really, if you are going to work on the web, javascript needs to be your new best friend. Think of it like this – you could switch to native app development, and then you’d have to know Objective-C for iOS, Java for Android, and C# for Windows Phone/8.

In comparison, javascript is not so bad :-)


Check out my previous blog post on Leveling up your Javascript for lots of links.

YouTube has tons of resources for Angular and Ember.

Html5Rocks & Polymer-Project – lots of info on Polymer and Web Components

Telemetry Part 2: The Code

In part 1 we covered the three most common types of telemetry data we want to collect. In this part we will review how to actually implement tracking within your code.

For this example we are going to use Google Analytics. Simple, Free, and virtually ubiquitous. Of course you could send this information to another service or a custom back-end, but that’s beyond the scope of this discussion.

Google Analytics API

The three types of telemetry we want to track – page views, user actions and in-browser performance – all map very nicely to three calls in the Google Analytics API:
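The shapes of those three analytics.js calls look like this – sketched with a stubbed `ga()` function so the calls can be exercised outside the browser; the category/label values are illustrative:

```javascript
// The three analytics.js call shapes (sketch). In the browser, ga() comes
// from the Google Analytics snippet; it is stubbed here so the calls can be
// run in isolation.
var sent = [];
function ga() { sent.push(Array.prototype.slice.call(arguments)); }

// 1. Page views
ga('send', 'pageview', '/search/results');

// 2. User actions (events): category, action, label
ga('send', 'event', 'map', 'basemap-change', 'satellite');

// 3. In-browser performance (timing): category, variable, milliseconds
ga('send', 'timing', 'xhr', 'fetch-rows', 347);
```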

Although analytics.js gives you a means to send this information to the backend service, it’s not the sort of thing that we want littered all over our code. Thus…

Separation of Concerns

Before we start into the details, let’s talk application design for a moment. While you could simply litter the code with calls to the analytics API, that would make a mess, and be a huge pain should you want to change to use some other tracking system.

Instead, we want to centralize the tracking and storing of telemetry in a central service. The specifics depend on your framework (Backbone/Angular/Ember/Dojo/other) but you will likely have some sort of “global event bus”, or a core “Application” object which all elements of your application can access.

In our application, we are using Backbone and Marionette, and so we have added methods to our “Application” object, as that is passed into all the modules.

Tracking Page Views

Again, your application architecture will dictate where to attach these events, but in frameworks that have the concept of a router, that’s a good place to start. For all “navigation” actions in our application, we follow the same pattern and centralize things onto an application method, specifically Application.navigate(). Super handy, because we can just drop the page view logging into this one function as shown below:

The actual logPageView function is just a call to the analytics function.
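Putting those two pieces together, a minimal sketch – `ga()` is stubbed here; in the app it is the analytics.js global, and `navigate()` would also perform the actual route change (e.g. via Backbone.history.navigate):

```javascript
// Sketch of centralized navigation + page view logging.
var calls = [];
function ga() { calls.push(Array.prototype.slice.call(arguments)); }

var Application = {
  navigate: function (route) {
    // ...actual routing logic goes here...
    this.logPageView(route); // single choke point for page view telemetry
  },
  logPageView: function (page) {
    ga('send', 'pageview', page);
  }
};

Application.navigate('/datasets/search');
```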

Tracking User Actions

As we mentioned in the last post, this is simply a means to track what the user has interacted with. So, any place you have DOM event handlers, we want to assign that action a name and make a call to the Application.logUiEvent() method.

Depending on the amount of “magic” your application framework comes with (I’m looking at you Ember and Angular) this may be more or less difficult. Even with Marionette in the mix, our Backbone based app is pretty un-magical. All DOM interaction happens in handlers, defined in Views. So, all we do is drop in calls to logUiEvent in these handlers. While this could be made even more elegant by overriding the backbone and marionette view event binding infrastructure, in the interest of keeping the code easy to understand, we opted to just add these calls. Here is an example from one of our views.
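A stripped-down sketch of the pattern – the view here is a plain object so it can run anywhere; in the app it is a Backbone/Marionette view, and the handler is wired up via the view’s events hash:

```javascript
// Central logUiEvent on the Application object, called from view handlers.
var logged = [];
var Application = {
  logUiEvent: function (category, action, label) {
    logged.push({ category: category, action: action, label: label });
    // in the app: ga('send', 'event', category, action, label);
  }
};

var SearchView = {
  // e.g. events: { 'click .basemap-toggle': 'onBasemapToggle' }
  onBasemapToggle: function () {
    Application.logUiEvent('map', 'basemap-change', 'toggle');
    // ...actual basemap-switching logic goes here...
  }
};

SearchView.onBasemapToggle();
```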

Tracking In-Browser Performance

This is the trickiest of the bunch. As we mentioned, we need two calls – one to setup the timer, and a second to indicate the event we were tracking has completed. Or failed. We also need to handle the case where it does not complete.

For this we created a timer object – “Took.js” – which grabs a time stamp when it’s instantiated, and calculates the duration until the stop() method is called. We also have two additional methods – store() and log().

Here is a jsbin that you can play with as well (obviously it won’t report to Google Analytics)

We also expose this via the Application object as two simple methods startTimer(name, category, label, maxDuration) and stopTimer(name).
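A minimal sketch of Took and the Application wrappers – the real Took.js also reports via store()/log() to analytics and uses maxDuration to discard timers that never complete; this version just records durations:

```javascript
// Minimal sketch of the Took timer and the Application wrapper methods.
function Took(name, category, label, maxDuration) {
  this.name = name;
  this.category = category;
  this.label = label;
  this.maxDuration = maxDuration; // used to discard timers that never complete
  this.startedAt = Date.now();    // timestamp grabbed at instantiation
}
Took.prototype.stop = function () {
  this.duration = Date.now() - this.startedAt;
  return this.duration;
};
Took.prototype.log = function () {
  // in the app: ga('send', 'timing', this.category, this.name, this.duration, this.label)
  return { name: this.name, category: this.category, duration: this.duration };
};

var Application = {
  _timers: {},
  startTimer: function (name, category, label, maxDuration) {
    this._timers[name] = new Took(name, category, label, maxDuration);
  },
  stopTimer: function (name) {
    var timer = this._timers[name];
    if (!timer) { return; } // stop called for an unknown/expired timer
    delete this._timers[name];
    timer.stop();
    return timer.log();
  }
};
```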

Throughout our code we wrap the various blocks we want timing data on in these two calls. Before we show an example, this brings up another area where we have centralized things – xhr calls. Although Backbone has a dependency on jQuery, and we could use $.ajax or $.getJSON anywhere in the application, we have decided to route all requests through a central location – again on our Application object.

Anyhow – here is an example of a call that fetches rows from a remote feature service.
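A sketch of what such a timed, centralized call might look like – the network call is stubbed with a resolved Promise, and the names and URL are illustrative; in the app this is a jQuery xhr against the feature service’s query endpoint, with the timing reported to analytics:

```javascript
// Sketch of a centralized, timed request.
var timings = [];

function fakeXhr(url) {
  // stand-in for $.ajax / $.getJSON against the feature service
  return Promise.resolve({ url: url, features: [{ id: 1 }, { id: 2 }] });
}

var Application = {
  request: function (name, url) {
    var startedAt = Date.now();
    return fakeXhr(url).then(function (response) {
      // in the app: Application.stopTimer(name) -> ga('send', 'timing', ...)
      timings.push({ name: name, duration: Date.now() - startedAt });
      return response;
    });
  }
};
```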

Using Telemetry

At this point, our project has not gone live, so we just have telemetry from dev and test environments in Google Analytics. That said, having this setup well before launch has already helped us re-arrange some of our naming conventions, and validated some ideas about the types of reports we can get out of the system.

Since we know that javascript performance varies greatly between browsers (orders of magnitude between recent browsers and older versions of Internet Explorer), we really wanted to make sure we could segment our performance data by browser and version.

It turns out that segmenting the data like this is not “built-in”, but it’s not hard to set up. Basically, you create new “segments”, and in the UI choose “Technology”, and then Browser & Browser Version.

With this in place, we can now easily compare performance of specific actions between different browsers. NOTE: data in this screen shot is from development – the actual performance is *much* better in production where all the code is combined and minified :-)

Analytics Example


We hope this helps you get started with telemetry in your javascript applications. This is extremely useful information to have, and now more than ever, it’s very easy to get. Happy coding!

Truth in Numbers: Telemetry for Javascript Apps


Designing an application is really a set of educated guesses about what features users will actually use. We do interviews, conduct user testing sessions, and use our existing understanding of the “ideal user”, but without data coming in from real users, we just don’t know. In this two part series we will show you how to get this sort of data.

Telemetry: The Raw Numbers

The idea here is simple: track user actions in your application and send that information (“telemetry”) to a service that will aggregate it. Most analytics packages do this, but they are oriented towards so-called ‘normal’ web pages, and the information they provide for javascript-heavy pages (aka apps) is very limited without some additional work.

Types of Telemetry

We are interested in three classes of telemetry data – Page Views, User Actions, and in-browser performance. In this post we will introduce these classes, give some ideas of what to track, and how to organize things. Next post we will review how to actually implement the tracking.

Page Views

While there are many “page view” tracking techniques, they usually revolve around full-page refreshes. Of course, modern javascript applications eschew full-page refreshes in favor of more immersive applications that manipulate the url via html5 push-state.

Put another way, a user may spend 40 minutes using your web app, but traditional “page view” measurement may only register a single “hit” – when the page first loads.

Thus, when building a rich javascript application, we need to help out and manually collect this information as the user changes pages, or “context”, within the application. While most “single-page applications” don’t have clear boundaries between “pages”, we can usually break things down into reasonable units. Most applications will have some “home” or landing view, a search view, one or more types of search results views, item detail views, etc.


The main thing is to come up with a naming convention for these “pages” that is consistent and makes logical sense.

We will look at the details of how to integrate this sort of tracking in the next post, but if your application has a “router”, that is likely a good place to attach page view logging.

User Actions

Being able to track user actions is the key to telling which features are actually being used. Since you are instrumenting the code yourself, you can track virtually anything that raises an event.

In our case, we want to measure the percentage of users who change the base map, who resize the map… and virtually every other interface interaction.

This really boils down to adding code to every DOM event handler in the app, so how we structure our telemetry helpers will be really important – we don’t want brittle code littered all over the app. Think about having a central “telemetry service” that is available to all views or DOM events in the application.

Having a good naming convention is even more important for this type of tracking, since these calls will likely be added by more than one developer, and a mish-mash of naming will make the telemetry data a mess to work with. On the upside, it’s pretty easy to tweak.

In-Browser Performance

The third type of information we want to track is related to performance. Our team develops on Chrome Canary, on maxed-out Retina MacBook Pros, using high-speed internet. Unfortunately, not all our users will have such an optimized environment. Add in the fact that we are supporting IE8/9/10/11, Chrome, Firefox, Safari, and Opera, and virtually the only way to get realistic performance information for all those platforms is to harvest it from real users.

The end goal, of course, is to improve the real-world performance of the application. But before we start wildly poking around the code base tweaking things we think may be performance bottlenecks, we want to have the system instrumented so we know where the real bottlenecks are – and so that when we deploy changes, we can see that we really do get improved performance.

So – what do we want to time? Initially, for our project, we want to track basic page load times, network calls (xhr’s), and computationally intensive code blocks (client-side filtering). Some specific timers:

  • how long did it take to load the page and initialize the app?
  • how long did it take to initialize the map?
  • how long did it take to execute a search?
  • how long did it take to display a layer?
  • how long did it take to sort a table?
  • how long did it take to filter a table?

While page view and events are essentially single calls, tracking timing requires two actions – one to start a timer, and a second to stop it and record the duration. Once again, having sensible, consistent naming is really helpful.

In the second part of this post, I will talk about how to integrate telemetry into an application.

Radio Telescope photo modified from Stephen Hanafin‘s Flickr stream. cc by-sa.

Working Around Min/Max Scale

Another quick trick when using the Esri Javascript API. If you run into a scenario where the service you are accessing has min/max scales applied, but you need the data outside that scale range, here is a trick that can help out.

Before we get to the how, there are a few things you should know about this trick:

Data Holes: When Max Record Count Bites Back

Even with the gridded queries that dynamic mode feature layers use, at small scales (zoomed way out) it is common that you will be requesting more than the max record count number of features. When this happens, you get “holes” in the data returned, as shown below.

data holes due to max record count

The default for max record count is usually 1000 features, but I think the latest release bumps this up to 2000. Regardless of the default, this value is set as part of the server configuration – so unless you control the server, it’s not something you can change.

Performance May Suffer

Scale ranges are commonly used to avoid sending very detailed or dense data over the wire. So, even if the layer you are working with only has a few dozen features, if they are really detailed geometries, things may get really slow, so perhaps reconsider.

How To

It’s actually really easy. Create a feature layer, then in its “load” event, reset the min/max scale properties. Then add it to the map.
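In code, the trick looks roughly like this – a sketch against the 3.x API; the helper works on any layer-like object (which also makes it easy to test), and the FeatureLayer usage is shown in comments:

```javascript
// Clear the layer's scale range once it has loaded, so it draws at every
// scale -- 0 means "no scale limit" in the ArcGIS JS API.
function clearScaleRange(layer) {
  layer.minScale = 0; // e.g. was 100000 from the service definition
  layer.maxScale = 0;
  return layer;
}

// In the app (sketch):
// var layer = new FeatureLayer(url, { mode: FeatureLayer.MODE_ONDEMAND });
// layer.on('load', function () { clearScaleRange(layer); });
// map.addLayer(layer);
```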

Here is a link to a JSBIN you can play with.

The map below shows a feature layer from a demo server that has a minScale of 100,000 shown on a map that is at 1:36,978,595

Scale-free FeatureService

And of course you can use the same technique with MapService layers. Shown below is a layer that has a maxScale of 1,000,000 and we are zoomed in well past that.

MapService as FeatureService

Feature Layers from Map Services

Here is a lesser-known fact: using the ArcGIS Javascript API, you can create a FeatureLayer from any vector layer in a MapService. Of course it will be read-only, but you still have all the usual control over it in terms of styling and interaction. If that works for you, it’s really simple: just drop the full url of the layer within the MapService into the FeatureLayer constructor, and you’re up and running.
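A sketch of that – the service url is illustrative, and the only “trick” is appending the layer index to the MapService url:

```javascript
// Build the url for a single layer inside a MapService: the service url
// plus the layer index.
function mapServiceLayerUrl(serviceUrl, layerIndex) {
  return serviceUrl.replace(/\/+$/, '') + '/' + layerIndex;
}

// In the app (3.x API, sketch):
// require(['esri/layers/FeatureLayer'], function (FeatureLayer) {
//   var layer = new FeatureLayer(
//     mapServiceLayerUrl('https://example.com/arcgis/rest/services/Demo/MapServer', 2),
//     { mode: FeatureLayer.MODE_ONDEMAND, outFields: ['*'] }
//   );
//   map.addLayer(layer); // read-only, but styled/queried like any FeatureLayer
// });
```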

Of course you should be careful with this technique. Many times MapServices are used to display very dense data, so you may end up pulling a lot of data over the wire. But, if you happen to need more interactivity from a layer that’s already published as a MapService, this is a great way to avoid having to publish a FeatureService.
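A sketch of the wiring (3.x API; the URL and the `mapServiceLayerUrl` helper are placeholders of mine; the key point is that the URL ends with the layer’s index within the MapService):

```javascript
// Build the REST URL for layer N of a map service. The trailing
// index is what lets FeatureLayer target a single MapService layer.
function mapServiceLayerUrl(serviceUrl, layerIndex) {
  return serviceUrl.replace(/\/+$/, "") + "/" + layerIndex;
}

// Browser wiring (only runs where the Dojo AMD loader is present):
if (typeof define === "function" && define.amd) {
  require(["esri/layers/FeatureLayer"], function (FeatureLayer) {
    var url = mapServiceLayerUrl(
      "https://services.example.com/arcgis/rest/services/Demo/MapServer", 2);

    // read-only, but styling and interaction work as usual
    var layer = new FeatureLayer(url, {
      mode: FeatureLayer.MODE_ONDEMAND,
      outFields: ["*"]
    });
  });
}
```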

Here is a JSBin with the example running.

MapService Layer as FeatureLayer

Esri Dev Summit: The Javascript Sessions

Javascript @ Dev Summit

Took a look through the online Agenda for the 2014 Esri Developer Summit and was really impressed at all the Javascript sessions – so I thought I’d highlight some of them here.

This list is mainly focused on advanced sessions, but for those transitioning to Javascript, there are a bunch of sessions early in the week. Just search for “Javascript” in the agenda. I’d link to those sessions, but at first that didn’t seem possible. (Update: apparently you can link to the sessions.)

Upping Your Javascript Game

These sessions will help you get a better handle on the language, the tools and the general workflows of the modern javascript developer.

Javascript Tooling

Joshua Peterson is cooking up a great session covering Chrome Dev Tools, Postman, Grunt, Bower, SublimeText, and more… This should be standing room only.

Monday 5:15pm – 5:45pm Demo Theater 2 – Oasis 1

Modular Javascript

Derek Swingley (@derekswingley) & Matt Driscoll will talk about leveraging AMD in your Dojo based applications.

Mon 11:30am – 12:00pm Demo Theater 2 – Oasis 1

ArcGIS API for Javascript

These are a sub-set of the sessions directly related to working with the JS API. There are quite a few more, but these hit the “new” stuff.

Working with WebMaps

Jeremy Bartley & Kelly Hutchins – Learn how to author and consume webmaps in the JS API. While not strictly necessary, leveraging web maps in your javascript apps can massively streamline your code.

Mon 3:30pm Demo Theater 2 – Oasis 1

Rethinking How you Style Your Maps

Jeremy Bartley & Jim Herries – Learn about the new renderers and styling options in the JavaScript API, including statistically significant heat maps, dot density, and aggregation of your data to standard geographies. Lots of this stuff is leveraging web workers – very cool.

Mon 1:00pm – 2:00pm Pasadena/Ventura/Sierra

Thu 1:00pm – 2:00pm Primrose A

ArcGIS API for JavaScript: What Have You Done for Me Lately?

Derek Swingley (@derekswingley) & Jerome Yang at it again – explore the latest features, enhancements, and improvements made to ArcGIS API for JavaScript in the past year. Topics will cover simple map widgets, advanced rendering options, easier use of SVG, AMD-style coding, and simpler event management. Hopefully they cover the Web Optimizer as I did not see that listed anywhere else.

Tues 5:30pm – 6:30pm Oasis 4

Thu 2:30pm – 3:30pm Catalina/Madera

Outside the Box

These sessions hit on new technologies in the JS API, how to use Leaflet with the Esri stack, and how to integrate other javascript frameworks – from PhoneGap, to AngularJs, to Node.

Many of these sessions are in “Demo Theaters”, and thus they only fit ~80 people. I’d expect that many of these sessions will be really crowded, so try to get there early.

Using Esri Leaflet

Patrick Arlt (@patrickarlt) from the Portland R&D Center will discuss the esri-leaflet plugin and how to combine it with other leaflet plugins to create light-weight web mapping applications. For some reason this session does not come up in a search for “javascript” in the online agenda. Check out the esri-leaflet project.

Mon 6pm-6:30pm Demo Theater 1 – Oasis 1

Using Web Workers and Processes to Bend Data to Your Will

Matt Priour (@mattpriour) & Lloyd Heberlie – Learn how to use the newly released Workers & Processors systems to process, modify, aggregate, or analyze your data efficiently. You will learn about these new features of ArcGIS API for JavaScript and see demonstrations of what you can do with them. This stuff is really sweet.

Tuesday 5:30-6:00pm Demo Theater 1 – Oasis 1

Accessing and Visualizing Esri GeoServices with the ArcGIS JavaScript API, D3, and Node.js

Nick Furness (@geeknixta) & Chris Helm (@cwhelm) mixing it up with all sorts of great technologies. Check out new ways to interact with various ArcGIS GeoServices APIs in the context of advanced JavaScript libraries such as D3.js and Node.js. The session will present ways to use third-party data and APIs within the ArcGIS platform and will illustrate how such data can be combined with other Esri services to make compelling maps and visualizations from a multitude of services. I suspect Koop will also make an appearance.

Wednesday 11:00am – 12:00pm Demo Theater 2 Oasis 1

Declarative Mapping Applications with Angular JS

Patrick Arlt (@patrickarlt) from the Portland R&D Center droppin’ the wisdom. AngularJS is a rapidly growing application framework from Google that focuses on extending HTML with a language for creating rich applications. AngularJS can help you create custom HTML elements and attributes so you can rapidly develop mapping applications with reusable components. Oh, and some esri-leaflet for good measure.

Tuesday 1pm-2pm Demo Theater 2 – Oasis 1

Native Mobile Web Apps with PhoneGap and jQuery

Andy Gup (@agup) & Lloyd Heberlie – More mobile but with a twist. Learn how to configure, build, and style hybrid, cross-platform mobile GIS applications that can access GPS, cameras, SD cards, and more. This session will cover implementation patterns using PhoneGap and jQuery.

Tues 2:30pm – 3:30pm Primrose A

Wed 4:00pm – 5:00pm Pasadena/Ventura/Sierra

Working with JavaScript App Frameworks and ArcGIS API for JavaScript

This session will cover patterns for integrating the JS API with other frameworks. I know they are going to cover Backbone/Marionette and Polymer (aka web components), and likely EmberJS. Lots of speakers – Derek Swingley (@derekswingley), Fred Aubry, Matt Priour (@mattpriour) and Mike Juniper (@mjuniper).

Tuesday 4pm-5pm Smoketree A-E

Testing Tools and Patterns for Javascript Mapping Applications

I’ve done a number of posts and talks on javascript unit testing, and I will be co-presenting this session along with David Spriggs (@davidspriggs) and Tom Wayson. We will be covering how to organize your code so it is testable, and how to write tests. We’ll talk about using Jasmine, Karma, Grunt, and a host of other tools. It’s late in the day and we promise to make this interesting!

Wed 5:30pm – 6:30pm Pasadena/Ventura/Sierra

Thu 1:00pm – 2:00pm Catalina/Madera