Concentric Sky


Nate Otto // June 8, 2015

Digital Badges On the Open edX Platform

Badgr: an open-source badge issuing, management, and user achievement tracking platform.

EdX and Concentric Sky have collaborated to incorporate digital badging into the Open edX platform. Following the integration of the Badgr software into a badging MVP on the Open edX platform, students will be able to earn badges upon completing a course and share those badges on the Mozilla Backpack.

At Concentric Sky, we’re proud to be part of a growing ecosystem around Open Badges. To support the community, we’ve developed Badgr, an open-source platform for issuing and managing Open Badges. And we couldn’t ask for a better launch partner for Badgr than edX. Open Badges are visual symbols of students’ accomplishments that they can take with them and display all over the web alongside badges from their other experiences.

When the Open Badges feature is activated, Open edX communicates with Badgr to create and store badge records for each student who completes a course. Open edX administrators can either configure an instance of our open-source Badgr Server package or use our free hosted Badgr platform. Every badge issued through Badgr is compatible with the latest version of the Open Badges specification, which was created by the Mozilla Foundation to help people connect their learning achievements from all different spheres of their experience. Using the open specification means the badges issued from within Badgr may be moved to or displayed within any other application that understands Open Badges. Users who have earned Open Badges anywhere else on the web can import them into Badgr and build a unified collection of their accomplishments, no matter where they were earned.

Students can store their digital badges and then present them together with badges earned in other experiences. Open Badges come in the form of an image file that learners can save on their hard drives or in the cloud. Various cloud platforms, including Badgr for web and mobile, the Mozilla Backpack, and Open Badge Passport, are designed to understand metadata “baked” into badge images and verify the authenticity of those badges, so that learners can reliably use these credentials when applying for jobs or demonstrating their competence.
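To make that baked-in metadata concrete, here is a minimal Swift Codable sketch of an Open Badges 1.x assertion. The field names follow the published specification, but the structure is a simplification for illustration — it is not Badgr’s actual data model.

```swift
import Foundation

// A minimal Codable sketch of the assertion metadata "baked" into an Open
// Badge image. Field names follow the Open Badges 1.x assertion format; the
// structure is simplified for illustration and is not Badgr's actual model.
struct BadgeAssertion: Codable {
    struct Recipient: Codable {
        let type: String     // e.g. "email"
        let hashed: Bool     // the identity may be salted and hashed for privacy
        let identity: String
    }
    struct Verification: Codable {
        let type: String     // "hosted" or "signed"
        let url: URL?        // for hosted badges, where the assertion lives
    }
    let uid: String
    let recipient: Recipient
    let badge: URL           // URL of the BadgeClass describing the achievement
    let verify: Verification
    let issuedOn: Date       // a Unix timestamp in the JSON
}
```

A displayer that understands this structure can fetch the hosted assertion from `verify.url` and compare it against the baked copy to confirm the badge is genuine.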

For its part, edX is proud to be collaborating with the Open Badging community, and Concentric Sky in particular, to herald a fundamental change in the way society recognizes, assesses, motivates, and evaluates learning. Digital badges will be an important part of digital credentials on the edX platform. After the completion of this MVP, edX will continue working toward becoming an issuer of badges for course completion and other incremental achievements for edX courses on edx.org. There are plans to instrument the edX platform to generate badging events for student achievements and to do extensive data collection around edX badge usage.

Together, edX and Concentric Sky see some exciting possibilities ahead involving awarding badges for smaller achievements within a course, representing skills and experience gained, and connecting badges in learning pathways that travel through multiple courses.

Cale Bruckner // March 23, 2015

App Association Annual Fly-In

The Capitol building in Washington, D.C.

I’m in Washington, D.C. this week to meet with elected officials and regulators about issues affecting the tech industry and our economy. 

As part of ACT | The App Association’s annual fly-in, I’m joining more than 50 small tech companies from across the country to advocate for an environment that encourages technical innovation and inspires economic growth.

Our message is simple. Small companies like Concentric Sky are creating solutions that are improving lives, creating jobs, and fueling our economy.

But policymakers in Washington must understand the issues threatening small tech companies to ensure that growth continues. The concerns we will raise this week include data privacy and security, internet governance, intellectual property and patent reform, mobile health regulation, and regulatory obstacles to growth. These are important issues on which the federal government is considering taking action.

I’m looking forward to sharing my perspective on these important issues with my elected officials and regulators.

Josh Clark // March 11, 2015

Designing for Relationship


As the digital landscape and physical space we inhabit become more integrated with one another, the role of design becomes increasingly difficult. We live in an age where people are constantly connected to devices and the trend of wearables, beacons, and connected homes will only make digital connections more pervasive. As we move into the constantly connected future, we require new design thinking. Design for utility, emotion, and even connection is no longer enough. We must begin to intentionally design for relationships.

Design thinkers like Donald Norman and Dieter Rams proposed that the major concern of design was a product’s function. For design to be functional, it must allow the user to accomplish the goal the device is purposed for. For example, a toaster that does not toast bread is more of a novelty than an effective kitchen tool. Function, efficiency, and utility were, and continue to be, among the most important design characteristics.

In the late 2000s another mantra took the design world by storm. In his book Designing for Emotion, Aarron Walter advocates for emotional design, arguing that judging something simply on its functional utility is a flawed baseline. It’s like being a chef and feeling the job is done when the food is edible. As design professionals, we should hold ourselves to a higher standard than “does it work?” Digital experiences should not simply be functional, but pleasurable. They should evoke emotional engagement and support patterns that trigger positive emotional reactions. The designer does this by assuring that the product is functional, reliable, and usable. More than that, designers should strive for delightful experiences.

The problem with emotional design is that people are no longer just connected with systems; systems are now connecting users with other people. User experience designers are not just creating software for human-computer interaction (HCI), but for human-human interaction. We’re dreaming up digital ways for people to enhance their personal relationships. For better or worse, we live in a world where the physical spaces we inhabit are now digital spaces as well.

The problem is that people are multi-faceted and come with all sorts of relational baggage. Here’s an example. In 2003 I got my first mobile phone. My grandmother, who was a beautiful woman, would leave scathing voicemails for me when I didn’t answer my phone while at work. She figured that I was obligated to answer my phone if it rang, and since I had my phone on me all the time, I should answer it at any time. We experienced relational disruption because of a change in the digital landscape: phones were no longer tethered to physical locales.

That was a decade ago. Now I live in an age where connections are everywhere. If someone wants to get in contact, they can message on Facebook and that message is sent to my computer, email, phone and tablet all at once. I can’t imagine how this access would have affected my relationship with my grandmother.

We are at a point where we have to re-evaluate our design philosophy. It’s no longer enough to design for emotion. We must design for relationships. That is to say, we must design experiences that help people relate well with one another.

Attempts to broach the topic of Relationship Design have often come under the guise of the term “social.” Conversations about social design, however, can be superficial at best. They center less around human relationships and more around connection, as if creating a pipeline from point A to point B were the same as creating meaningful human experiences. As designers we can do so much more to enhance relationships. We must elevate our craft from providing ways to connect to facilitating healthy and meaningful relationships between people.

How do we design for relationships? How does the way in which we interact digitally create positive human relationships, like friendships, partnerships, family, co-workers, and even marriages? How does this world of ubiquitous connection make us more human, not less? Pulling from various fields, including social psychology, marriage and family counseling, and user experience, there are several areas we can focus on when designing. While I cannot attempt to establish patterns for each area in this blog post, I will acknowledge them and follow-up with each as a separate blog post in the future.

Create Healthy Boundaries

Major relationship deficiencies begin when the lines between one person and another become blurred. The clinical term for this is enmeshment. In co-dependent relationships, a person feels like they are losing themselves to the needs and desires of another person. In the digital space, we aid enmeshment by degrading a person’s ability to create personal space and clarify boundaries for themselves. And it is only getting worse as we become more connected.

Make Connection Management Easier

Let’s be honest: the beeps are killing us. We are pushers of interruptions, many of them unnecessary and unhelpful. In addition, we have relationships flying at us from 20 different angles: email, Facebook, Twitter, Instagram, Kik, Skype, and Snapchat, to name just a few. The emotional energy we spend managing connections impedes our ability to invest in relationships. Relationship Design means optimizing connection management.

Support Positive Connection

We’ve all been there: some ignorant thing gets posted on the Internet and people get angry. Relationship Design forces us to focus not simply on viral content, but on safe and meaningful relationships. Most arguments on the web are rooted in polarities. Relationship Design focuses on building opportunities for healthy dialogue and disagreement in the hope of bringing people toward one another rather than reinforcing their differences.

Looking Forward

There are several other areas to focus on in Relationship Design, including the development of singular self-integration, creating opportunities for shared memory and reducing cognitive dissonance in relationships. In this series I’ll be focusing on digital design patterns that help build stronger relationships for each area of focus above. In the words of Dr. Ian Malcolm from Jurassic Park, “your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” We’ve come to a watershed moment in digital design. We have to stop and ask ourselves, not only if we can build social experiences for our users, but what we should build to support excellent relationships. I look forward to sharing with you the ways we’re building relationships here at Concentric Sky.

Nate Otto // February 3, 2015

Open Badges and Micro-Credentials Technical Roadmap

Header image: Nevada Lane, @NevadaSF. This post is adapted from a technical session recap, 30 January 2015, in Redwood City, CA.


Last week, representing Concentric Sky and the Oregon Badge Alliance, I was an invited participant at the Educator & Workforce Micro-Credentials Summit, put on by Digital Promise with the support of the MacArthur Foundation and the Carnegie Corporation. Thanks to Digital Promise and MacArthur for extending an invitation and bringing the Oregon Badge Alliance’s perspective to Redwood City. Concentric Sky is working to define a new endorsement extension to the Open Badges specification in order to elevate the best badges and issuers within their communities of earners and consumers.

I was specifically requested to participate in a session on “The Credentials Roadmap: the Technical Side of Micro-Credentials.” In this portion of the summit, we addressed questions around micro-credentials from a technical perspective, but the questions we considered were echoed in every other session I attended, and in many of the informal conversations in small groups and around tables the rest of the day.

We talked about interoperability, about adopting a multi-stakeholder perspective, and about the importance of a long-term view. But there was one question that rightfully occupied most of our time in talking about the technical roadmap, and it was the most frequent topic of the entire summit.

The big question that will be before us for years is how the value of microcredentials will be determined. (I started talking about this last week, by beginning to investigate the concept of “currency.”) David Blake from Degreed pointed out that when we talk about interoperability today, we are talking on the level of technical compatibility, not on the level of value. While the technical validation of badges and the adoption of the common data specification is absolutely necessary to interoperability, these components alone do not ensure that micro-credentials issued by one organization can be easily valued within a different organization’s context.

The Open Badges specification creates a distributed infrastructure with a low barrier to entry for new issuers, because there are no central gatekeepers whose authorization must be gained in order to participate. With the thousands of issuers that already exist and the potential millions that may join them in coming years, it is a major challenge to compare different micro-credentials. While issuers, earners, and consumers all have a role in determining how badges are valued, “currency” is measured from the value system, context, or place within a network of trust occupied by a consumer. The question a consumer might ask is “how does this credential fit into standards I respect, and why should I trust that it lives up to the promise of that alignment?” How can we guide that consumer to an answer without requiring hours or days of effort researching each new credential and its issuing organization? Erin Knight described this issue as the “most pressing” question in her 2012 paper on badge validation, calling the technical validation measures provided by the Open Badges specification a “baseline” from which to start addressing the more important questions about value.

This isn’t a problem exclusive to digital or micro-credentials, though it may be a more present problem in our minds because of the diversity of micro-credentials that an open standard allows. Many people are familiar with treating even college degrees as a sort of “black box.” Today, college degrees are well respected as the gold standard of educational credentials, but are impossible for employers to translate into specific skills, understanding or mindsets conferred to their earners. These existing credentials rely on large institutional gatekeepers, and the network of trust they create excludes a diversity of voices and organizations whose learning programs and credentials present value that has not been created within the traditional system.

Within the Open Badges specification community, we often consider badges as visible declarations of trust, and we are working on an endorsement specification to allow recognizers and third parties to add their voices about which micro-credentials speak to their own value systems or those of consumers who trust them. (See the Endorsement Working Group framework paper for background.) Endorsements aim to help guide earners toward credentials of value and help consumers expand the scope of credentials they can recognize, so they can turn those credentials into opportunities granted to earners.

It will be a heavy lift, and there’s no easy path to understanding large swaths of the open micro-credentials landscape. But endorsements present an opportunity to define new networks of trust, open to broad participation, that can begin to show consumers which badges are trustworthy.

There is the chance that in the face of this hard problem we will recreate existing value systems reliant on large established gatekeepers, because we are unable to translate the value provided by new players into our local contexts. But there is also a chance to build up an emergent ecosystem of understanding micro-credentials issued by a diverse range of providers, layering trust relationships and endorsements. With open technology and cooperative services like BadgeRank.org, we may build up visible records of our trust relationships, and then we might see where many of the micro-credentials created by diverse issuers are situated, each from our own perspectives within a network of trust.

I am taking the lead for Concentric Sky on defining endorsement as an OBI 1.1 extension.

Nate Otto // January 26, 2015

The ‘Currency’ in your Credentials: 3 Trust Principles for Building Open Badges Software

Badges connection

Open Badges are a technology that promises to serve as portable digital credentials. Each badge symbolizes particular achievements a badge issuer recognizes about a recipient. The goal is that, as a “shared language for data about achievements,” Open Badges and the accomplishments they represent can be understood by employers, colleges, and other consumers of credentials.

It is badge consumers who are the arbiters of which badges are valuable. In 2015, software that uses Open Badges needs to focus more on helping badge consumers decide which badges make trustworthy claims.

As a developer working with Open Badges, I see a need for badge software to fill this value gap by ensuring that badge consumers can understand what information is being presented in a badge and how it applies to their context. An employer may see that a job applicant has earned a badge for experience with the Python programming language, but there is currently no easy way for this type of badge consumer to quickly understand how applicable that experience is to the job description she’s hiring for, or to see whether the badge is trusted by others in her network. Without making the badge understandable from within consumers’ contexts, badges have no “currency.”

(Image: CC-BY epsos.de)

Currency, as a quality of money, corresponds to whether an artifact is generally accepted. Among credentials in the US, we could say bachelor’s degrees have currency; they are often listed as a top-line requirement for a wide range of positions, and are estimated to become even more important. A Georgetown University study last year predicted that the bachelor’s degree would become a requirement for at least 63% of job openings by 2018.

In the Open Badges community, “currency” has long been a goal. The title of the ongoing MOOC for Open Badges on Blackboard’s Coursesites platform is “Badges: New Currency for Professional Credentials,” and among the working groups of the Badge Alliance, building understanding of badges among employers and other credential consumers has been a key focus.

In the fall, I participated in a roundtable webinar on badges hosted by the collaborative site Working Examples, where we referred to currency as the “holy grail” sought after by badge program designers. I followed up for a more targeted discussion with Krystal Meisel, who worked last summer with the city of Los Angeles on their City of Learning program. We distilled several factors that form both the barriers to how badges could gain currency and the opportunity points that our community, and specifically the developers at CSky, can build software around.

As I wrote for the DPD Project, issuers often try to convey the value they think badges will carry to their potential earner population, only to be met with incredulity or unease. Students are rightly skeptical of educators’ or techies’ claims that a particular credential will open up unspecified but valuable opportunities, and potential badge consumers are unwilling to promise valuable opportunities to earners of unfamiliar badges before seeing what real-world earners of those badges can do. It’s a catch-22 that undermines alternative credentials’ ability to gain currency.

UK research organization Jisc summarized the challenge based on an interview with the Badge Alliance’s Carla Casilli: “It’s clear that for badges to have currency, people need to be confident in their value.” Casilli elaborated on her own blog that badge currency arises from trust networks, and if they are to gain currency, badges “must not only engender trust, but actively work to build it.” She sketched out some features and practices of open badges systems that together build trust.

Currency Comes from Trust

A consumer’s ability to trust the claims made by a badge starts with verification of its recipient and authentication of its validity. Over time, consumers can consider the reliability of a particular issuer for recognizing earners of a certain quality and can take into account the accreditation or endorsement of external organizations. These factors all add up to trust in the badge as a credible claim about the earner. But Casilli hints at the ephemerality of trust in a credential, saying that “Trust is a delicate alchemical reaction based on complex and varying degrees of components, environment, perceptions, etc.”

The goal of open badges supporters isn’t to create an ecosystem of credentials that are trusted tenuously and ephemerally; it has long been argued that open badges have the potential to serve as currency. To build currency with badges, consumers need to know when they can trust a badge’s claims, and potential earners need to know whether the badges they have a chance to earn will be trusted by the employers, colleges, or partners to whom they hope to present them.

3 Trust Principles for Building Badge Software

Open Badges have the potential to unlock value for their earners, in terms of new jobs, collaborations, and opportunities. Here are three tips for software developers looking to turn this potential into cold hard currency.

1. Recognize that consumers and earners may be unfamiliar with Open Badges

Badge issuing programs may provide valuable experiences and have rock-solid assessments, but if the consumers of their badges don’t know how to access the information in badges’ metadata, there is no way for them to decide whether the program is trustworthy. Software for earners needs to help them show their badges in a wide variety of circumstances, often to consumers who may never have seen an Open Badge before. This places a lot of responsibility on badge recipients not only to explain their own accomplishments, but also to explain in high-pressure job application processes what Open Badges themselves are and how to interpret them.

This barrier to developing trust in badges can be alleviated by embedding information about the features of Open Badges where badges are displayed. Make it clear that an issuer recognized an earner for a specific accomplishment, and plainly display the links to criteria, evidence, and the issuer. A badge earner’s accomplishments are relevant in many different contexts and conversations, and badge displays should be tailored to the needs of those contexts. For example, a resume is the expected format for discussing credentials in the hiring process. Developers who wish to target job applications as a medium of badge sharing may seek to let earners easily embed badges into their resumes.

2. Consumers must know why they can trust an Open Badge is valid

Issuers, earners, and consumers of Open Badges all have an interest in knowing that a badge presented by its recipient is valid. And when earners show off their badges to consumers who may never have seen badges before, they need to put the ability to perform validation at those consumers’ fingertips. Software developers who write applications representing earners’ interests need to make it easy for earners to put their badges and auditable proof of those badges’ validity in front of consumers. Closely linking software that allows earners to share their accomplishments with software that allows consumers to validate them helps reduce the friction and increase trust.

Make it clear what types of validation an application performs on the badges it displays. A valid badge is one truly issued by the issuer to the recipient that the consumer expects, when that badge assertion has not expired or been revoked.
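Those checks can be sketched in Swift. The `Assertion` type and its fields below are hypothetical simplifications for illustration — they are not part of the Open Badges specification or any particular validator:

```swift
import Foundation

// A sketch of the validation checks described above: issued by the expected
// issuer, to the expected recipient, not expired, and not revoked.
struct Assertion {
    let issuerDomain: String   // domain hosting the assertion
    let recipientHash: String  // salted hash of the recipient's email
    let expires: Date?         // nil means the badge never expires
    let isRevoked: Bool
}

func isValid(_ assertion: Assertion,
             expectedRecipientHash: String,
             trustedIssuerDomain: String,
             now: Date = Date()) -> Bool {
    // Truly issued by the issuer the consumer expects
    guard assertion.issuerDomain == trustedIssuerDomain else { return false }
    // Issued to the recipient the consumer expects
    guard assertion.recipientHash == expectedRecipientHash else { return false }
    // Not expired
    if let expires = assertion.expires, expires < now { return false }
    // Not revoked
    return !assertion.isRevoked
}
```

An application displaying badges could surface each of these four checks individually, so a consumer sees not just “valid,” but why the badge is valid.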

3. Leverage cooperation to make trust networks visible

The Badge Alliance is in the process of finalizing a specification for “endorsement” of Open Badges and Issuers. Just like the badges given to earners, endorsement badges are shareable declarations of trust. One of the most important questions to answer about whether a badge should be trusted is who else trusts it, and the endorsement specification will make it possible to begin answering this question. BadgeRank.org, a project by Concentric Sky, will utilize public endorsement data as it emerges to serve as a repository for information about the community’s trust in various badges and issuers.

The Open Badges community is cooperative and proactive in defining methods of cooperation. Where it is a heavy lift for one developer or company to build currency for badges, cooperating with a community to establish trust can distribute the load. With our own proposal to the DML Trust Competition, we introduce a plan for building software that embodies these three principles, and we are happy to see that other initiatives, like Badge Europe’s “Open Badge Passport” and the Open Badge Exchange project out of Dartmouth College, are also approaching questions of badge currency through building trust.

The Open Badges community will make great progress in 2015 building better software for issuing badges and for earners to manage and organize them. But for those badges to have currency, badge consumers need to have software that represents their interests and helps them decide which badges to trust.

Kurt Mueller // January 22, 2015

APOD for Apple Watch


After years of rumors and hype, Apple announced the Apple Watch to much fanfare in September, 2014. Though it will not be available for purchase until sometime later in the first quarter of 2015, Apple developers can start working on apps for the Watch now, with Xcode 6.2 Beta available from developer.apple.com. I wanted to learn more about developing for Watch and about Swift, Apple’s new programming language, so I wrote a Watch app in Swift to accompany APOD, a popular iOS app we created here at Concentric Sky. (In related news, we launched APOD for Android this week.) Here’s a brief explanation of how I created the Watch app for the APOD iOS app.

Apple Watch Development Basics

First, let’s talk about current development options and restrictions for Apple Watch. As of today, all third-party Watch apps must have a corresponding iPhone app that handles most of the heavy programmatic lifting. You can’t write a Watch app that runs entirely on the Watch hardware, without a communicating iPhone app, though it’s likely that this restriction will be eased over time as the platform matures and developer tools are improved. For now, only Apple can make Watch apps that run without an iPhone and corresponding phone app. 

As explained in more detail by Apple here, Watch apps support three types of interfaces: full-app interactions, glances, and notifications. A full-app interface is required, while glances and notifications are optional. In this article I will discuss creating a full (but simple) interface, and I will address glances and notifications in subsequent articles.

A Watch App for APOD

APOD displays astronomy pictures from the APOD repository, along with titles and descriptions. The iOS APOD app has a gallery view (implemented as a UICollectionView) and a single-image view. Given the small size of the Watch screen, it made the most sense to create a single-image view first, before trying to show multiple images. However, I decided to display the single image as a table with one cell, to facilitate showing multiple images at some later point. I wanted an image and a title label to fill up the entire watch display:

Create Watch Targets

Watch apps are implemented as App Extensions to iOS apps, using the new WatchKit framework available in Xcode 6.2 beta. The first step in creating a Watch app is to make WatchKit Extension and WatchKit App targets in your iOS app, using File / New / Target and selecting the Apple Watch template:

In the options window, I choose Swift as the language for my new target, and I check the boxes for “Include Notification Scene” and “Include Glance Scene.” Checking these boxes causes Xcode to create stub Controller classes for notifications and glances, and add scenes for these to the Watch storyboard it creates.

Now in Project Navigator I see the new targets:

The WatchKit Extension is for the code that runs on the iPhone to support the Watch app, and the WatchKit App target has the storyboard and image assets file for the Watch. You can see that there are no .swift files in the WatchKit App target, which makes sense given that it is not possible for third-party developers to create code that runs directly on Watch at this point. We can define user interfaces for Watch, but the code that controls those interfaces runs on the phone. The full app is controlled by InterfaceController.swift. The NotificationController and GlanceController will be tackled later.

Configuring the Storyboard

Looking at the WatchKit App’s Interface.storyboard, there are four scenes, but I am only concerned with the Interface Controller Scene:

I would like to display a single image with a two-line label under it for the image’s title, and to make the future goal of displaying multiple images easier, I will make a table with a single row. The WatchKit table class is called WKInterfaceTable. There’s a Table object in the Objects library in Interface Builder:

Dragging a Table to the Interface Controller Scene results in:

Within the new Table, there’s a Table Row Controller. This is conceptually similar to a prototype cell in a UITableView or UICollectionView. The Table Row Controller is backed by a custom row controller class that has outlets for each of the UI objects within the table row that you wish to update when displaying the table. In this case, I want an image and a label, with the image above the label. You can see that the table row has a Group item, which is a WKInterfaceGroup. This group will contain the image and the label and determine how they are displayed. To keep things simple, Watch layouts don’t use constraints like iOS storyboards. Instead, a group can either have a horizontal or vertical layout, much like Android’s LinearLayout, and it will display contained items from left to right (for horizontal layouts) or top to bottom (for vertical layouts). I want a vertical layout with the image appearing above the title, so I adjust the group’s Attributes:

I’ve given the group a vertical layout, set Custom Insets to 0 so that the image and label will be flush up against the edges of the display, and set the Size Height to be Relative to Container, with a value of 1. This makes the group take up the entire vertical space of the container, which is its table row.

Next I add an Image object from the Objects library, inside the group:

For the image Size, I set the Width and Height to Relative to Container, with the Width filling the container (value of 1) and the Height taking up 75% of the container height (value of 0.75). This leaves enough room under the image for a two-line label:

The last step in designing the UI is to set the image and label to sensible defaults to indicate that an image is loading, for display before I set the actual APOD image and title in code. I do this by adjusting the Image attribute of the image and the Text attribute of the label (I first add a default image to my WatchKit App’s Images.xcassets file):

Adding a Custom Row Controller

Next I create a Table Row Controller class to provide IBOutlets so I can set the row’s image and label text at runtime. I create a new Swift file called APODRowController.swift:

This file defines a class that subclasses NSObject and has IBOutlets for the image and label defined in the storyboard, above. It also has an apodKey variable to keep track of which APOD is displayed by the row, and a configureCell() method that takes the key, title text, and image and sets the image and title text in the displayed row.

Now that I have a custom class to back the table row, I must tell the storyboard about the custom class and make the IBOutlet connections from the class to the image and label. These are the Identity, Attributes, and Connections inspectors for the Table Row Controller after I update it:

In the Identity inspector, I set the Custom Class to my newly-created custom class, APODRowController. In the Attributes inspector, I change the name of the row controller identifier to “default,” which will be used later when I configure the row in the interface controller. And in the Connections inspector, you can see the Outlet connections I made from the row controller IBOutlets to the image and label in the storyboard.

Bringing it all Together with InterfaceController

Finally I am ready to flesh out the boilerplate InterfaceController.swift class. If this were a regular iOS UITableView controller, I would implement various methods in the UITableViewControllerDatasource and UITableViewControllerDelegate protocols to configure the number of sections and rows in the table, create each row of the table, etc. However, WatchKit tables are much simpler, and because I am only displaying a single row in my table, simpler yet. The number of rows for a Watch table must be set and each row needs to be configured up front when the table is loaded. If my table had multiple rows I would iterate through them, configuring each one, but since I have just one row I don’t have to loop at all. Here’s the entire class:

The class contains an IBOutlet for the WKInterfaceTable I created in the Watch app storyboard, which I connected in the storyboard from the Interface Controller to the table. It also has a reference to an ApodService class, defined elsewhere and beyond the scope of this blog post, that performs asynchronous loading of today’s APOD. The APODService has a single public function that takes a completion handler:

I am only displaying a single row, backed by an APODRowController object, and I keep a reference to that row called todayCell to enable configuration of it after the asynchronous loading of today’s APOD is complete.

I override the WKInterfaceController superclass function awakeWithContext() to call loadTable(). loadTable() first tells the table that it will have a single row, and that row is of type “default” (recall that I defined my row controller Identifier attribute to be “default” in the storyboard). Then I ask the tableView to give me an APODRowController object and assign it to my todayCell variable. Next, I call the APODService function to load today’s APOD and pass in a completion handler block that configures the row with the resulting key, title, and image. And finally, another call to the tableView’s setNumberOfRows function causes the tableView to redraw, displaying the updated row.
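If the table did have multiple rows, the pattern would be the same: set the row count up front, then configure each row controller in a loop. Here's a rough sketch of that shape in plain Swift, using stand-in Table, Row, and RowData types (not the real WatchKit classes) so the loop is easy to see in isolation:

```swift
// Stand-in types (not WatchKit classes) used only to illustrate the pattern.
struct RowData { let title: String }

final class Row {
    private(set) var text = ""
    func setText(_ newText: String) { text = newText }
}

final class Table {
    private(set) var rows: [Row] = []
    // Mirrors the shape of WKInterfaceTable.setNumberOfRows(_:withRowType:).
    func setNumberOfRows(_ count: Int, withRowType type: String) {
        rows = (0..<count).map { _ in Row() }
    }
    func rowController(at index: Int) -> Row { rows[index] }
}

// Set the number of rows first, then configure each one up front.
func loadTable(_ table: Table, with items: [RowData]) {
    table.setNumberOfRows(items.count, withRowType: "default")
    for (index, item) in items.enumerated() {
        table.rowController(at: index).setText(item.title)
    }
}
```

The real version would use WKInterfaceTable's setNumberOfRows(_:withRowType:) and rowControllerAtIndex(_:) in place of these stubs.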

The first time you run a Watch app in the simulator, you must wait for the simulator to launch and then go to Hardware / External Displays and select one of the two Apple Watch displays (38mm or 42mm). Then you will see the Watch simulator appear. This is what I see for my simple APOD display:

Next Steps

This is a very simple example, and only scratches the surface of Watch interfaces and interactivity. Maybe I want to view previous APOD images, or get notifications on my watch when a new APOD image is available. Perhaps I want to share my favorite APOD images with friends through social media or messaging. Wouldn’t it be nice if the APOD watch app knew what I last viewed in the iOS app and could automatically show it to me on the watch? Maybe I want to look through images on the watch and then fling one to my phone to check it out on the bigger display. The possibilities for novel and useful interactions between watch and phone are endless.

We are very excited here at Concentric Sky about wearables and we can’t wait to get our hands on actual Apple Watch hardware in the next couple of months. In the meantime, we are busy exploring the developer tools and adding support to our apps in anticipation of the big launch. Check back for more Apple Watch news, as this is sure to be a hot topic.

Code from this post:

//
// APODRowController.swift
// APOD
//
// Created by Kurt Mueller on 1/18/15.
// Copyright (c) 2015 Concentric Sky, Inc. All rights reserved.
//
import Foundation
import WatchKit
class APODRowController: NSObject {

    @IBOutlet weak var apodImage: WKInterfaceImage!
    @IBOutlet weak var titleLabel: WKInterfaceLabel!

    var apodKey: String?

    func configureCell(key: String, title: String, image: UIImage) {
        apodKey = key
        titleLabel.setText(title)
        apodImage.setImage(image)
    }
}
  
//
// InterfaceController.swift
// APOD WatchKit Extension
//
// Created by Kurt Mueller on 1/18/15.
// Copyright (c) 2015 Concentric Sky, Inc. All rights reserved.
//
import WatchKit
import Foundation
class InterfaceController: WKInterfaceController {

    @IBOutlet weak var tableView: WKInterfaceTable!

    var apodService = APODService()
    var todayCell: APODRowController! = nil

    override func awakeWithContext(context: AnyObject?) {
        super.awakeWithContext(context)
        loadTable()
    }

    private func loadTable() -> Void {
        tableView.setNumberOfRows(1, withRowType: "default")
        todayCell = tableView.rowControllerAtIndex(0) as APODRowController

        apodService.currentApodInfo { (failed, title, image) in
            self.todayCell.configureCell(self.apodService.currentApodKey(), title: title!, image: image!)
            self.tableView.setNumberOfRows(1, withRowType: "default")
        }
    }
}

Nate Otto // January 14, 2015

OCDL Trust Ecosystem Project Announcement

Oregon Center for Digital Learning, Oregon Badge Alliance, Concentric Sky

At Concentric Sky, we are proud to serve as the technology partner for the Oregon Center for Digital Learning (OCDL). OCDL is a new non-profit organization founded to support the use of digital badges and other collaborative education technology for learning in Oregon. Together with OCDL, we have applied for a grant through HASTAC & MacArthur’s Digital Media and Learning (DML) Competition (dmlcompetition.net) - which is focused this year on trust in Connected Learning environments.

As a technology to support education, Open Badges have tremendous potential to connect learning across different contexts and to build connections between widespread educational organizations in our communities. As a founding member of the Oregon Badge Alliance, Concentric Sky hopes to further develop the technology that students and the programs they participate in need to bring learning experiences closer together and promote trust.

Along with our partners in the Oregon Badge Alliance, we plan to help jumpstart and support a cross-section of collaborative pilot programs issuing Open Badges. Twelve such programs are currently underway, including partners among out-of-school learning organizations, workforce development nonprofits, and higher education institutions. We will also be building a framework for cooperation, through the Oregon Badge Alliance, supporting not only these programs that wish to issue badges, but also the learners who earn them and the representatives of employers, educators, and potential collaborators who want to understand them.

Our proposal for the DML Competition is now open for public voting through January 20. If you support our efforts to create a Trust Ecosystem around Open Badges in Oregon, we ask that you visit the DML Competition site and vote for our proposal. The people’s choice component of the competition could help us win one of three $5000 technology grants that could further support our program.

The Trust Ecosystem Project

The Trust Ecosystem Project will work with 12 pilot badge programs, employers, and Oregon Badge Alliance partners in workforce development, government, K12 and higher education to build software and a framework for connecting learning experiences with Open Badges. The project aims to close the loop between badge issuers, earners and consumers by building software that represents the interests of each stakeholder group. Each application will be released open source as well as hosted for public use. Beyond software, the Trust Ecosystem Project will organize a youth advisory council and will bootstrap a trust network around badges with pilot programs and badge-consumer partners in Oregon, yielding a variety of case studies and potentially exportable implementation models.

Samantha Kalita // January 8, 2015

Five Steps to Know Your Target Users

Couple using devices

In Use the 5 W’s to Create an Excellent User Experience, we recommended applying the writing mnemonic (Who, What, When, Where, and Why) to guide user experience design decisions. Let’s take a deeper look at these tools through a five-part series. We’ll start with one of the most important considerations in design: “Who is your target?”

Why it’s important

This is a diverse world. People vary in many ways: language, culture, education, beliefs, income, etc. It is impossible to create the perfect user experience for everyone, so don’t try to, and don’t worry. Not everyone will be interested in your product or service; instead, focus on your high-value users and consumers. Specialize and optimize for their needs, limitations, and expectations.

Getting started

Follow these five steps to answer any of the 5 W’s:

  1. Review - Understand your product
  2. Research - Evaluate present state
  3. Strategy - Prioritize efforts
  4. Data - Validate with information
  5. Optimize - Iterate on solutions

Who is your target? Let’s start at the beginning…

Review

Understand your product

Know your product or service inside and out. Be able to articulate it in one sentence. Know its purpose, benefits and weaknesses. Focus on your core objectives.

Research

Evaluate present state

Analyze the competition.

Who is their target? How are they performing for that target? Have they missed any opportunities? Are there any consumers being underserved? Does that underserved market match your target?

Know your space.

How is your target being addressed in other markets? What works well? What doesn’t work well?

Stay current.

Are there any emerging groups or consumers who would benefit from your product or service?

Strategy

Prioritize efforts

By this point, you should have a good understanding of who your target user is. Now you need to share what you’ve learned with your team and/or client. Often you will have several different types of users. Create user personas for each of them. A user persona is a profile of a fictional person who matches your target. Personas help you visualize and get into the mindset of your consumer. Give your persona a name, age, family, education, hobbies, language, ethnicity, etc. Find a headshot that fits your persona. Add any images or content that will help your team envision each persona as a real user. Make sure to include how they will use your product/service. Identify their expectations and pain points.

Once you’ve identified your key users, prioritize them. You will design for your primary users while checking that your solutions work for your secondary and tertiary users.

Data

Validate with information

Up to this point you’ve made educated guesses about your target. Now confirm that you’re on the right track by talking with them. Conduct focus groups, run online surveys, join discussion boards, etc. Choose whatever method works best for you. Make sure that you’re gathering first-hand information from your target. Gather quantitative (e.g. “X% of participants preferred Option A”) and qualitative data (e.g. “I like Option A because I can use it while I’m on the go.”)
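On the quantitative side, turning raw responses into an “X% of participants preferred Option A” statement is straightforward. A small sketch (the function name and responses are purely illustrative):

```swift
// Summarize quantitative feedback: what share of responses preferred an option?
// (The responses and option names are illustrative.)
func percentPreferring(_ option: String, in responses: [String]) -> Double {
    guard !responses.isEmpty else { return 0 }
    let matching = responses.filter { $0 == option }.count
    return Double(matching) / Double(responses.count) * 100.0
}
```

Pair numbers like these with the qualitative quotes; the percentage tells you what users prefer, the quotes tell you why.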

Optimize

Iterate on solutions

Repeat the process and re-evaluate with the data you collected. Continue to iterate over the life of your product and service. It’s very likely that you will see new competitors or have potential niche market consumers.

Although it may sound simple, the most important thing you can do is to keep your eyes and ears open to your users. Be proactive. Be aware. Listen to your users. The better you understand your users the better your product/service will be.

What process have you used to identify your target?

Nate Otto // October 21, 2014

Introduction to Open Badges

Badges

Hello! Allow me to introduce myself - I’m a new face on this blog, and a new developer at Concentric Sky working on our web applications that deal in Open Badges. For the last year, I have been coordinating a team at Indiana University studying 30 projects that designed and implemented programs to issue open digital badges for learning. The findings from that project are being published now. We found that overall program success often came to the programs that had the best understanding of how all the moving parts of their design fit together, not necessarily those with the most ambitious plans or the best technology.

I’m proud to be joining Concentric Sky, because the team here really understands the potential of Open Badges. With our years of combined experience in EdTech, we’re in an excellent position to help organizations build programs that issue badges - and we can provide much of the software that will help each of their participants earn badges, manage their credentials, and most importantly, use them to unlock future opportunities.

What are Open Badges?

Open Badges are digital images that symbolize particular achievements, benchmarks, or experience. Unlike many of the digital badge systems that have sprung up in videogames and online, Open Badges are a shared language for data about these achievements. They are designed to break down the barriers between different systems that understand only their own sets of familiar credentials.

Open Badges directly embed data about the achievement they represent inside the image. This data stays with the image as it is moved and shared. Using this technology is a way for badge earners to bring together verifiable representation of qualifications, skills, and experience to tell a unified story about their accomplishments, no matter whether those badges were issued by a single education provider or by a wide range of issuers.
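To make the idea concrete, here is a sketch of reading a few of the fields a badge assertion carries. This is an illustrative subset of the Open Badges assertion format, not the full specification, and the URL and values below are made up:

```swift
import Foundation

// An illustrative subset of the metadata an Open Badges assertion carries.
// (Field names follow the assertion format; values here are made up.)
struct BadgeAssertion: Codable {
    let uid: String
    let badge: String   // URL of the badge class describing the achievement
    let issuedOn: Int   // Unix timestamp of when the badge was awarded
}

let json = """
{"uid": "abc123",
 "badge": "https://example.org/badges/intro-to-badges.json",
 "issuedOn": 1413849600}
"""

let assertion = try! JSONDecoder().decode(BadgeAssertion.self, from: Data(json.utf8))
print(assertion.uid)  // the identifier travels with the badge wherever it goes
```

Because this data is baked into the badge image itself, any application that understands the format can recover and verify it, which is what makes the credential portable.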

The metadata standard was originally designed by a team at the Mozilla Foundation, and now many organizations, including Concentric Sky, are contributing to advancing the standard and growing the ecosystem of organizations and people who can act as badge issuers, earners, and consumers. Using this common standard for embedding metadata about achievements into badges helps consumers understand what badged accomplishments mean, and in addition, also enables automatic verification of authenticity. This means admissions offices, hiring managers, and others who examine credentials can shift their attention from calling phone number after phone number to verify qualifications, to determining whether or not those qualifications help represent someone who is a good fit for their mission and goals.

The Potential of an Open Badge Ecosystem

The well-recognized credentials of today’s education system, from the high school diploma to the PhD, are familiar to the public and are at home in resumes and applications for all sorts of positions. But there is also a wide range of learning providers operating outside the accredited education environment that offer youth and adults learning experiences that represent important components of people’s educational journeys. These providers often award their own paper certificates for the various accomplishments that they measure, but the public has little to no familiarity with these credentials, and so they are often not represented as prominently as the traditional components of an individual’s experience. For learning providers, Open Badges represent an opportunity for organizations both inside and outside the formal education sector to contribute richer information about badge earners’ experiences in a way that can help them better represent themselves in conversations about their qualifications.

We believe all stakeholders in the education ecosystem could be better served by providing and accessing more detailed information about achievements, especially as the need to connect learning across different environments, formal and informal, from a young age through learners’ careers, increases.

Concentric Sky is incubating multiple projects to serve all sides of the Open Badges ecosystem. From making enterprise-level issuing tools available to even the smallest learning program, to the mobile Badgr badge repository available for iOS and Android, to the BadgeRank website that aims to begin crowdsourcing information about the value of badges, the idea is to make it possible for many issuers and earners to better tell their own stories where it counts, and for their audiences to understand them.

We’re excited to participate in growing the ecosystem and helping learners access and receive the benefits of participating in a wide variety of learning experiences.

Samantha Kalita // September 23, 2014

Use the 5 W’s to Create an Excellent User Experience

Woman using device

The 5 W’s—the fundamental writing mnemonic we learned in grade school—can help us clearly communicate a story to our audience. They remind us to tell the key points of a story: Who, What, When, Where, and Why. This mnemonic method can also be used as a tool to guide successful user experience design.

Who

Who is your target?

One of the most important considerations in any design is to know who you are designing for. Think about that user’s needs, limitations, and expectations for all aspects of the experience.

Consider what language you want to use.

For example, if you’re designing a learning tool for a student, make sure the vocabulary matches their reading level. Also consider what tone and voice you want to use. Do you want to be authoritative or chummy? If you have a culturally diverse customer base, it may be important to offer multi-lingual experiences.

Empathize with your user.

Do they have any disabilities (e.g., color blindness, poor hearing, poor vision, etc.)? What information and in what format (e.g., text, image, video, audio) do they want? Always try to make their lives easier.

Who does this well:

KinderTEK

Their target is schools, which means they have optimized their designs for both students and educators.

What

What is your goal?

This is the “Raison d’être” as the French say. It’s the reason for existence. Keep this foremost in your thoughts. Let it dictate every decision you make. After all, it is the whole point of why you’re creating this experience.

State your goals.

Make sure everyone is clear about those goals from the beginning. Know what your goals are before you start designing. When designing, constantly ask yourself, “Is this helping the user achieve their goal?” If the answer is no, consider excluding it from your design.

Measure your progress.

Don’t forget to establish metrics to evaluate how well you are doing. Create measurable and achievable goals. If your goal is to increase email subscriptions, state that you want to increase conversions by a particular percentage within a window of time.
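For instance, a goal like “increase conversions by 10% this quarter” is trivial to check once you record a baseline. A small sketch of the idea (the type and numbers are illustrative, not a real analytics API):

```swift
// A measurable goal: "increase conversions by N% within the window".
// (Names and numbers here are illustrative.)
struct ConversionGoal {
    let targetPercentIncrease: Double

    func isMet(baseline: Double, current: Double) -> Bool {
        guard baseline > 0 else { return false }
        let percentIncrease = (current - baseline) / baseline * 100.0
        return percentIncrease >= targetPercentIncrease
    }
}
```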

Who does this well:

Silicon Shire

Their goal is to promote technology businesses in the Eugene-Springfield metropolitan area.

When

When should you show CTAs?

Make calls to action (CTAs) relevant and easy. Provide users with the context and information they need to take action. Don’t make conversions a battle. Guide them through the actions they need to take.

Make it relevant.

Be context-sensitive. Both the content and the action should be aligned. For example, if you mention that they can reach out to customer service if they have additional questions, enable them to contact customer service directly.

Be strategic.

Often you’ll have several goals for users. Prioritize those goals and be selective about when you present them to users. Don’t list all the possible actions that a user can take at one time. Distribute them across your experience. Give the highest priority action the most visibility. Provide context and support for taking action. Make sure action language is clear and concise.

Who does this well:

Hatch Canada

Since their primary goal is to have parents sign their children up for after-school programming instruction, this appears as the dominant CTA above the fold. Their secondary goal is to have users contact the instructor. This appears below the primary CTA and in a more modest styling.

Where

What platform makes sense?

To answer this question, you need to have previously answered “Who” and “What.” It’s important to understand your users’ behavior as well as the benefits and limitations to the networks and platforms they’re on.

Know where your users are and aren’t.

You have this great social networking plan. It will be the next viral sensation. Everyone will be tweeting about it for months. Except your target isn’t on Twitter, they’re on LinkedIn. Make sure to focus your energy on platforms where you get the most bang for your buck.

Know how to best leverage platforms.

Not everything is designed to do everything. Don’t design a mobile application if all you really need is a mobile-optimized website. Pinterest is great for image sharing. Twitter is great for short thoughts. Understand each platform’s strengths and weaknesses and decide which best matches your goals.

Who does this well:

Libations

Since this service helps users track their favorite drinks, it’s important that users can access it on the go, wherever they are. They chose to create a mobile app which is ideal for on-the-go access.

Why

Why should users act?

Users are savvy. It’s important to demonstrate how you will benefit their lives from both a logical and an emotional point of view. There are a lot of fish in the sea, so don’t get lost in the current. You might have the best service or product, but if you can’t communicate that to your user, you will have lost them.

Address their problems.

Life is complicated. Make it better. Show them that you understand their problems and how you’ll make it better.

Show your value.

This can be done by illustrating cost savings, sharing third party testimonials, displaying comparison charts, etc. Whatever approach you take, make sure you demonstrate your worth.

Who does this well:

Mama Seeds

To establish their credibility as pregnancy experts, they identified well-known pregnancy resources who have leveraged their content.

At its core an excellent user experience is achieved by understanding your users and making their lives easier.

What mnemonics have you used in your design process?

Kurt Mueller // June 4, 2014

Creating a Multipage PDF Document from UIViews in iOS

blog-image-1.jpg image

We created an educational children’s app for iPad that includes a photo scrapbook. Students earn stickers and animal photos for the scrapbook as they use the app, and they are given a few minutes to interact with the scrapbook at the end of each learning session. Since the scrapbook serves as both an indicator of student progress and a fun reward for student effort, we provide an in-app mechanism to export the scrapbook to PDF format so that students will have something tangible to take away from their time with the app (in addition, of course, to increased knowledge and understanding!). In this post, we’ll explore the steps necessary to take a set of UIViews, each representing an individual scrapbook page, and create a multipage PDF document that can be emailed or printed directly from iOS.

Scrapbook Overview

A typical scrapbook contains many pages, each displayed side by side with another to look like a physical book. Here’s an example of two facing pages with photos and stickers, as they appear to students in the app:

Each page is represented by a class called ScrapbookPage, and contains one or two photos and an arbitrary number of stickers (students are free to move stickers around in the scrapbook). We present two ScrapbookPages side by side using a UIPageViewController, which provides really nice page turning interactivity.

We want our PDF output to accurately represent the in-app scrapbook, so we will show two ScrapbookPages side by side on each page of the PDF output. This means that the PDF document needs to be in landscape orientation. We also need to accommodate the possibility of an odd number of ScrapbookPages, in which case the last page of our PDF output will display a single ScrapbookPage rather than two.
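The pairing logic this implies can be sketched independently of any drawing code. The post's actual code is Objective-C; this plain-Swift sketch with a stand-in Page type just shows the shape: walk the pages two at a time, leaving the right slot empty when the count is odd.

```swift
// Stand-in for a scrapbook page; only identity matters for the pairing logic.
struct Page { let number: Int }

// Walk the pages two at a time; the last pair has an empty right slot
// when the page count is odd.
func pairedPages(_ pages: [Page]) -> [(left: Page, right: Page?)] {
    var pairs: [(left: Page, right: Page?)] = []
    var index = 0
    while index < pages.count {
        let right: Page? = index + 1 < pages.count ? pages[index + 1] : nil
        pairs.append((left: pages[index], right: right))
        index += 2
    }
    return pairs
}
```

Five scrapbook pages, for example, produce three PDF pages, with the last showing a single page on the left.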

Generating PDF Data from a UIView Subclass

Before we tackle the problem of making a multipage document from many ScrapbookPages, let’s start with the more basic task of turning a single UIView subclass into PDF data. In a later section we’ll expand on the basic task by adding logic to loop through an array of ScrapbookPages and create facing page views.

In this basic example, we’ll create a UIView instance that takes up the full screen in landscape orientation, with size 1024x768:

UIView *testView = [[UIView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 1024.0f, 768.0f)];

Next we create a mutable data object to hold our PDF-formatted output data:

NSMutableData *pdfData = [NSMutableData data];

Then we create a PDF-based graphics context, with our mutable data object as the target:

UIGraphicsBeginPDFContextToData(pdfData, CGRectMake(0.0f, 0.0f, 792.0f, 612.0f), nil);

Note that 792x612 is the size in pixels of a standard 8.5x11” page at 72dpi, in landscape mode. We are passing nil as the last parameter, which could instead be an NSDictionary with additional info for the generated PDF output, such as author name.

Then we mark the beginning of a new page in the PDF output and get the CGContextRef for our PDF drawing:

UIGraphicsBeginPDFPage();
CGContextRef pdfContext = UIGraphicsGetCurrentContext();

Remember that our UIView has size 1024x768, and our PDF page has size 792x612. To make sure that all of the UIView is visible in the PDF output, we must scale the context appropriately. 792 / 1024 = 0.773, which is our scaling factor:

CGContextScaleCTM(pdfContext, 0.773f, 0.773f);

Now that all setup is done, we finally get to the exciting part: rendering the UIView’s layer into the PDF context:

[testView.layer renderInContext:pdfContext];

To finish up, we end the PDF context:

UIGraphicsEndPDFContext(); 

At this point, we have an NSData object (pdfData) that contains a PDF representation of our UIView (testView). Here’s all the code from this example together:

UIView *testView = [[UIView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 1024.0f, 768.0f)];
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, CGRectMake(0.0f, 0.0f, 792.0f, 612.0f), nil);
UIGraphicsBeginPDFPage();
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
CGContextScaleCTM(pdfContext, 0.773f, 0.773f);
[testView.layer renderInContext:pdfContext];
UIGraphicsEndPDFContext();

Creating a Single Full-screen UIView Subclass Instance from Two ScrapbookPages

In the previous section, we learned how to render a full-screen UIView into a PDF context. We omitted something important, however: the UIView was empty, with nothing in it. That will make for a pretty boring PDF. In this section we’ll see how to add two facing ScrapbookPages to a UIView subclass.

In the app, each ScrapbookPage displayed onscreen has a size of 475x577. Two of these fit side by side on a full-screen landscape page with space between them and a border around the outside, like so:

As mentioned previously, the scrapbook facing pages view in the app is controlled by a UIPageViewController. This provides a very polished and natural simulation of an actual book, with realistic page turning animations as you drag your finger over the pages. This is great for interactive use of the scrapbook, but it’s not necessary for rendering of static pages, and in fact would probably add a lot of CPU and memory overhead to the process. Instead of using a UIPageViewController for the PDF rendering process, we created a simple UIView subclass called ScrapbookOpposingPagesPrintingView. This class manages layout of two ScrapbookPages, on top of a background UIImageView representing an open book.

How do the individual ScrapbookPages get laid out? Before we proceed, we need to introduce another class: ScrapbookPagePrintingView. This UIView subclass takes in its init method a ScrapbookPage object, which is a simple NSObject subclass describing the photos and stickers on a page, and does the actual layout of the photos and stickers described in the ScrapbookPage object by creating UIImageViews for each photo and sticker. The ScrapbookPagePrintingView adds these UIImageViews as subviews to itself. We will not describe this class further, as its internal operations are unimportant to the present discussion.

Here’s what we’ll see in the code below: we have two ScrapbookPage objects, and we create ScrapbookPagePrintingView objects from each one. We add two of these ScrapbookPagePrintingViews to our ScrapbookOpposingPagesPrintingView, which is then ready for rendering to PDF. This diagram shows the relationship between ScrapbookPagePrintingViews and an enclosing ScrapbookOpposingPagesPrintingView:

This is ScrapbookOpposingPagesPrintingView’s interface (.h) file:

@class ScrapbookPage;

@interface ScrapbookOpposingPagesPrintingView : UIView

- (void)showLeftScrapbookPage:(ScrapbookPage*)leftPage rightScrapbookPage:(ScrapbookPage*)rightPage;

@end

And here is ScrapbookOpposingPagesPrintingView’s implementation (.m) file:

#import "ScrapbookOpposingPagesPrintingView.h"
#import "ScrapbookPagePrintingView.h"
#import "ScrapbookPage.h"

static const CGRect ScrapbookOpposingPagesLeftPageFrame = {36.0f, 92.0f, 475.0f, 577.0f};
static const CGRect ScrapbookOpposingPagesRightPageFrame = {514.0f, 92.0f, 475.0f, 577.0f};

@implementation ScrapbookOpposingPagesPrintingView

- (id)init {
    CGRect fullScreenFrame = CGRectMake(0.0f, 0.0f, 1024.0f, 768.0f);
    self = [super initWithFrame:fullScreenFrame];
    if (self) {
        self.backgroundColor = [UIColor whiteColor];
        UIImageView *backgroundImageView = [[UIImageView alloc] initWithFrame:fullScreenFrame];
        [backgroundImageView setBackgroundColor:[UIColor clearColor]];
        [backgroundImageView setContentMode:UIViewContentModeCenter];
        [backgroundImageView setImage:[UIImage imageNamed:@"scrapbook-print-bg"]];
        [self addSubview:backgroundImageView];
    }
    return self;
}

- (void)showLeftScrapbookPage:(ScrapbookPage*)leftPage rightScrapbookPage:(ScrapbookPage*)rightPage {
    if (leftPage != nil) {
        ScrapbookPagePrintingView *leftPageView =
          [[ScrapbookPagePrintingView alloc] initWithFrame:ScrapbookOpposingPagesLeftPageFrame
                                          andScrapbookPage:leftPage];
        [self addSubview:leftPageView];
    }

    if (rightPage != nil) {
        ScrapbookPagePrintingView *rightPageView =
          [[ScrapbookPagePrintingView alloc] initWithFrame:ScrapbookOpposingPagesRightPageFrame
                                          andScrapbookPage:rightPage];
        [self addSubview:rightPageView];
    }
}

@end

In its init method, we call [super initWithFrame:] and pass a full-screen landscape orientation frame. Then we add a UIImageView containing the image representing an open book, which will be behind the two facing pages:

In the showLeftScrapbookPage:rightScrapbookPage: method, we accept one or two ScrapbookPage objects and create ScrapbookPagePrintingViews from them, then add them as subviews to self. Note that we pass different statically-defined CGRect frames to the ScrapbookPagePrintingView init method for left and right pages, to make sure that the resulting ScrapbookPagePrintingViews show up on the left or right side when added as subviews to self. These frames have different origins for left and right, but the same size, since each ScrapbookPagePrintingView is the same size.

Instantiating a ScrapbookOpposingPagesPrintingView and passing one or two ScrapbookPages to showLeftScrapbookPage:rightScrapbookPage: results in a full-screen landscape orientation ScrapbookOpposingPagesPrintingView with an open book background image and two ScrapbookPages:

Putting it All Together

Now that we know how to generate PDF data from a single UIView or UIView subclass, and we know how to create a ScrapbookOpposingPagesPrintingView class containing two facing ScrapbookPages, we will add logic to iterate over an array of ScrapbookPages to create as many ScrapbookOpposingPagesPrintingViews as we need, noting that the last ScrapbookOpposingPagesPrintingView may only have a single ScrapbookPage on it if we have an odd number of ScrapbookPages.

To accomplish this, we need two methods: one that prepares the pdfData mutable data object to hold the PDF output and iterates through the ScrapbookPages, and one that creates and renders a ScrapbookOpposingPagesPrintingView for each pair of pages. Here’s the first method:

- (NSData*)scrapbookPdfDataForScrapbookPages:(NSArray*)scrapbookPages {
    NSMutableData *pdfData = [NSMutableData data];
    UIGraphicsBeginPDFContextToData(pdfData, CGRectMake(0.0f, 0.0f, 792.0f, 612.0f), nil);

    if (scrapbookPages.count > 0) {
        NSUInteger pageIndex = 0;
        do {
            ScrapbookOpposingPagesPrintingView *printingView =
              [[ScrapbookOpposingPagesPrintingView alloc] init];

            ScrapbookPage *leftPage = scrapbookPages[pageIndex];
            // only include right page if it exists
            ScrapbookPage *rightPage =
              pageIndex + 1 < scrapbookPages.count ?
              scrapbookPages[pageIndex + 1] :
              nil;
            [printingView
              showLeftScrapbookPage:leftPage
              rightScrapbookPage:rightPage];
            [self addPrintingViewPDF:printingView];
            // take two pages at a time
            pageIndex += 2;
        }
        while (pageIndex < scrapbookPages.count);
    }

    UIGraphicsEndPDFContext();

    return pdfData;
}

And the method that performs the rendering to PDF for each ScrapbookOpposingPagesPrintingView, called by the method above, is:

- (void)addPrintingViewPDF:(UIView*)printingView {
    // Mark the beginning of a new page.
    UIGraphicsBeginPDFPage();
    CGContextRef pdfContext = UIGraphicsGetCurrentContext();

    // Scale down from 1024x768 to fit paper output (792x612; 792/1024 = 0.773)
    CGContextScaleCTM(pdfContext, 0.773f, 0.773f);
    [printingView.layer renderInContext:pdfContext];
}

Calling scrapbookPdfDataForScrapbookPages: with an array of ScrapbookPages results in an NSData object containing a PDF representation of the entire scrapbook, which can be used in many ways. In the app, we enable printing the PDF output directly via AirPrint, as well as emailing it as a file attachment. Perhaps we’ll cover those two mechanisms in another blog post.

Daniel Wilson // February 27, 2014

A Custom Django Widget


Writing a custom admin widget can be a little tricky, due to the way that form data is handled. To minimize that trouble, I would highly recommend extending an existing widget if at all possible, since Django has a lot of custom labeling and logic to create, populate, validate, and submit admin forms. Once you have a functional baseline, any custom behavior can overwrite the defaults.

This example comes from a project with a purchase request model. The admin site had a model changeform, which contained information about the requester, the requested item, and so on. We also had a textfield which was a human-readable phrase describing the purchase request. This field needed to have a button near it which would pull information from other parts of the form, and could then be manually edited and submitted to the database normally.

In this example, I will walk you through the creation of a custom formfield widget and how to get it properly plugged into the admin. The widget itself is just a textarea with a button, so all we need to do is take the HTML output from the textarea and append it. The render() method accepts the currently instantiated widget object (self), the name of the form field using the widget (name), the current contents of the html textarea (value), and any attributes passed to it by the widget class; it is responsible for returning a valid HTML string describing the widget, so that is what we will construct.

One thing you need to know is that each purchase request can be associated with an asset. We’re going to want to know which asset the purchase request is linked to, so we start off by creating a purchase request/asset dictionary. Thus:

# In apps/purchaserequest/widgets.py
from django import forms
from django.utils.safestring import mark_safe

from purchaserequest.models import Asset


class PRNotesWidget(forms.widgets.Textarea):

    def render(self, name, value, attrs=None):
        # Build a dictionary linking purchase requests
        # with their corresponding assets
        pr_asset_dict = {int(asset.purchaserequest_id):
                         int(asset.asset_number) for asset in
                         Asset.objects.all()
                         .exclude(purchaserequest_id=None)}

        # Start with the textarea, then append a script
        # containing the logic to populate it, and the
        # button to trigger the script.
        html = super(PRNotesWidget, self).render(name, value, attrs)
        html += """
          <script type="text/javascript">
            var populatePRNotes = function() {
              // Select the fields that will populate this field
              var qty = document.getElementById('id_qty').value;
              var item = document.getElementById('id_name').value;
              var who = document.getElementById('id_who').value;

              // Get the id of the purchase request from the form
              var pr_id = document
                .getElementsByClassName('field-request_id')[0]
                .value;

              // Careful here: we're ending the Python string,
              // inserting the dictionary we built earlier,
              // and then continuing the string.
              var pr_asset_dict = """ + str(pr_asset_dict) + """;

              // Now access the dictionary using the purchase
              // request id as a key to get the corresponding
              // asset (if there is one)
              var pr_asset = pr_asset_dict[pr_id];

              // Build the text to display in the form field.
              var display_text = qty + ' ' + item +
                                 ' for ' + who;
              if (pr_asset) {
                display_text += ' (Asset #' + pr_asset + ')';
              }
              document.getElementById('id_accounting_memo')
                      .innerHTML = display_text;
            }
          </script>
          """
        # This button will trigger the script's function
        # and fill in the field.
        html += """
          <button type="button" onclick="populatePRNotes()">
            Create PR Notes
          </button>
          """

        # Since we are using string concatenation, we need to
        # mark the result as safe in order for it to be treated
        # as html code.
        return mark_safe(html)

Now that our widget is defined, all we need to do is link it to an admin field. We do this by setting the formfield widget to PRNotesWidget like so:

# In apps/purchaserequest/fields.py
from django.db import models

from purchaserequest.widgets import PRNotesWidget


class PRNotesField(models.TextField):

    def formfield(self, **kwargs):
        kwargs['widget'] = PRNotesWidget
        return super(PRNotesField, self).formfield(**kwargs)

The field needs to be explicitly specified in a form:

# In apps/purchaserequest/forms.py
from django import forms

from purchaserequest.fields import PRNotesField


class PRAdminForm(forms.ModelForm):
    # The form for the purchase request model should
    # use our custom field
    accounting_notes = PRNotesField()

And then, of course, we need to make sure we’re using that form in the admin:

# In apps/purchaserequest/admin.py
from django.contrib import admin

from purchaserequest.forms import PRAdminForm


class PRAdmin(admin.ModelAdmin):
    # The purchase request admin should be using the
    # custom admin form
    form = PRAdminForm

You’ll note that I’ve split the admin, form, field, and widget each into files with their respective names. This is only really necessary if you have a large project with lots of custom widgets and fields. However, this structure is preferable both for accommodating future growth and for understanding the hierarchy and flow of the app.

This was my first attempt at a custom widget, and a number of improvements could easily be made from here. For example, it is not necessary to create a custom field as I did, since Django provides a shortcut in a form’s Meta class to define a “widgets” dictionary with field names as keys and widgets as values. You’ll also notice that I query the database to build pr_asset_dict and dump the entire dictionary into the JavaScript. A better way would be to make an AJAX call and retrieve only the asset I want. While the example presented here might be the most easily understood implementation, there is always room for optimization.
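For reference, the Meta “widgets” shortcut mentioned above looks roughly like this. This is a hypothetical sketch, not code from the project: the model name PurchaseRequest is assumed, and with this approach the custom field class becomes unnecessary.

```python
# In apps/purchaserequest/forms.py -- sketch only; the model name
# PurchaseRequest is an assumption for illustration.
from django import forms

from purchaserequest.models import PurchaseRequest
from purchaserequest.widgets import PRNotesWidget


class PRAdminForm(forms.ModelForm):
    class Meta:
        model = PurchaseRequest
        fields = '__all__'
        # Map field names to widget instances; Django wires the
        # widget into the auto-generated form field for us.
        widgets = {
            'accounting_notes': PRNotesWidget(),
        }
```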

Daniel Wilson // January 2, 2014

Data Migrations with South and Django


The workflow for a data migration in Django with South migrations is relatively simple, and fairly well-documented. If you have a model that you want to modify, you’ll want to

  1. define your new fields and create a schemamigration;
  2. create a blank migration and access the ORM dictionary to write your data migration, which moves the data from the old fields to the new; and
  3. remove the old fields and create another schemamigration to say goodbye to those unsalted passwords forever.
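In command form, the three-step workflow looks roughly like this (the app name and migration name here are placeholders):

```shell
# 1. After adding the new fields to models.py:
./manage.py schemamigration myapp --auto

# 2. Create a blank migration, then edit its forwards() to copy
#    data from the old fields to the new ones:
./manage.py datamigration myapp move_old_fields_to_new

# 3. After removing the old fields from models.py:
./manage.py schemamigration myapp --auto

# Apply everything:
./manage.py migrate myapp
```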

The workflow is simple enough to understand, but if you want to do anything more complicated than break your names into first_name and last_name, you’ll need some more tools. Recently, I ran into a situation where I needed to condense two entire models into a single super-model that contained all fields from both of the originals. To illustrate, I will first give a simple, silly example. If you need neither simplicity nor silliness, feel free to skip to the latter section, in which I lay out how to write an epic-level data migration.

Silly Example: Hybridizing Animals

First, lay out the models. Ducks and beavers each get a name, a tail type, and a boolean for their bill (by default, beavers don’t have one). For simplicity’s sake, put both of these in an “animals” app within models.py

from django.db import models

class Duck(models.Model):
    name = models.CharField(max_length=32)
    weight = models.DecimalField()
    tail = models.CharField(default="feathered", max_length=32)
    bill = models.BooleanField(default=True)

class Beaver(models.Model):
    name = models.CharField(max_length=32)
    weight = models.DecimalField()
    tail = models.CharField(default="broad and featherless", max_length=32)
    bill = models.BooleanField(default=False)

With that taken care of, run the initial migration

./manage.py schemamigration --initial animals 

Then, create some animals in the database. Registering the app in the Django admin makes creating animals easy.

Time to get hybridizing! The three steps are schemamigration, datamigration, schemamigration, so start by creating the hybrid animal class. This goes in animals/models.py with the other two. Give it the same fields as before, but do not specify defaults because these need to come from the inherited classes, and they’re all required by default anyway.

class Platypus(models.Model):
    name = models.CharField(max_length=65)
    weight = models.DecimalField()
    tail = models.CharField(max_length=32)
    bill = models.BooleanField()

New model added; run the schemamigration:

./manage.py schemamigration animals --auto 

To set up the datamigration, begin by creating an empty migration. Don’t forget to give it a name:

./manage.py datamigration animals hybridize_ducks_and_beavers 

Inside the migration file, write a forwards function:

def forwards(self, orm):
    for duck in orm['animals.duck'].objects.all():
        beaver = orm['animals.beaver'].objects.get(id=duck.id)
        from animals.models import Platypus
        platypus = Platypus(
            name = duck.name + "-" + beaver.name,
            weight = (duck.weight + beaver.weight) / 2,
            tail = beaver.tail,
            bill = duck.bill,
        )
        platypus.save()

A couple of things to note here:

  1. The script loops through every duck in the list of ducks. It matches every duck with a beaver by grabbing the beaver that has the same id as each duck. (It assumes, of course, that there is a matching beaver for each duck.)
  2. Since there are not currently any Platypuses registered, they do not appear in the ORM. Rather than referencing existing models – as done with ducks and beavers – the script needs to import Platypus from the animals models.py file, and create a new instance of the model each time it iterates through the loop.

The new platypuses have hyphenated names. Their weights are an average of their parents, and they get their tails and bills from their beaver and duck parents, respectively:

The genetic experimentation is complete; all that is left is to remove the old models. In animals/models.py, delete all the duck and beaver code, and run

./manage.py schemamigration animals --auto 

This will delete the old tables, leaving only platypuses!

Serious Example: Merging Django’s auth.user Model With a Custom User Model

Django’s default user model automatically provides a variety of commonly-used fields, such as username, email, password, is_staff, last_login, and so on. With the release of Django 1.5, it is now relatively simple to write a user model which encapsulates these fields as well as any other custom information that needs to be stored about the user. However, prior to this, it was necessary to create a separate, custom table to contain any extra information, and link it via a one-to-one relationship to the auth.user table. This is the situation I was confronted with on a recent project, and when the time came to upgrade the project to Django 1.5, it made sense to combine the two user tables into one larger table to simplify storage and referencing. The procedure helped solidify my understanding of Django user models as well as South migrations, and I hope it helps you as well!

To begin, the auth_user table contained the columns: id, username, first_name, last_name, email, password, is_staff, is_active, is_superuser, last_login, and date_joined. Additionally, the auth_user model had many-to-many relationships with tables called “groups” and “user_permissions”. The custom user model was in an app called members. Thus, the members_user model contained the columns: user_ptr_id (the link to auth_user), user_type, birthdate, bio, email_prefs, hide_onboarding, cancel_state, cancel_reason, and photo. Additionally, the members_user model had three many-to-many fields: each user had favorite_comments, favorite_journal_entries, and favorite_videos.

Ultimately, I wanted all of this data to be encapsulated in a new model called “Profile” in the members app. First, I created the new Profile class in my members/models.py file. It was a duplicate of the existing members_user model, except that it also inherited from django.contrib.auth.models.AbstractUser. This is the mixin used by the regular auth.user model, and granted my Profile model all of the usual user fields (password, username, etc.). Then, I ran

./manage.py schemamigration members --auto 

to generate the blank model, ready to be populated.

The tricky part is the data migration. In order to coerce the data into a single table, it is necessary to loop through each auth_user; and each time:

  1. create a new profile object,
  2. insert the auth_user data,
  3. create new many-to-many tables from auth_user,
  4. insert the members_user data, and
  5. create new many-to-many tables from members_user.

First, run

./manage.py datamigration members migrate_userdata_to_profiledata 

Next, the data migration forwards function:

from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        "Write your forwards methods here."
        # Note: Remember to use orm['appname.ModelName']
        # rather than "from appname.models..."

        for authuser in orm['auth.user'].objects.all():

            # Create a new members.Profile for every existing auth.User. I
            # needed to import Profile in order to create new instances of it.
            from members.models import Profile
            memberprofile = Profile(
                id=authuser.id,
                password=authuser.password,
                last_login=authuser.last_login,
                is_superuser=authuser.is_superuser,
                username=authuser.username,
                first_name=authuser.first_name,
                last_name=authuser.last_name,
                email=authuser.email,
                is_staff=authuser.is_staff,
                is_active=authuser.is_active,
                date_joined=authuser.date_joined
            )

            # Transfer the many-to-many tables from auth_user
            for group in authuser.groups.all():
                memberprofile.groups.add(group.id)
            for permission in authuser.user_permissions.all():
                memberprofile.user_permissions.add(permission.id)

            try:
                # If there is an associated members.User,
                # add those fields to the members.Profile
                memberuser = orm['members.user'].objects.get(user_ptr_id=authuser.id)
                memberprofile.user_type = memberuser.user_type
                memberprofile.birthdate = memberuser.birthdate
                memberprofile.bio = memberuser.bio
                memberprofile.email_prefs = memberuser.email_prefs
                memberprofile.hide_onboarding = memberuser.hide_onboarding
                memberprofile.cancel_state = memberuser.cancel_state
                memberprofile.cancel_reason = memberuser.cancel_reason
                memberprofile.photo = memberuser.photo

                # Transfer the m2m fields from user to profile
                for comment in memberuser.favorite_comments.all():
                    memberprofile.favorite_comments.add(comment.id)
                for journalentry in memberuser.favorite_journal_entries.all():
                    memberprofile.favorite_journal_entries.add(journalentry.id)
                for video in memberuser.favorite_videos.all():
                    memberprofile.favorite_videos.add(video.id)

            # In case there is a problem getting the related
            # members_user model, I used pdb to diagnose the issue.
            except orm['members.user'].DoesNotExist:
                pass
            except Exception as e:
                import pdb; pdb.set_trace()

            # All done! Save, and move on to the next user.
            memberprofile.save()

After performing a data migration this big, it’s important to check the actual data for consistency. Indeed, as I wrote this function, I performed the data migration, identified an error, and deleted the table data and migration many times.

The last step was to delete the old members.user model and run

./manage.py schemamigration members --auto 

Transition complete; all user data is in a single table!

Concentric Sky uses Django as one of our core technologies. With Django, we build backends for mobile applications, craft custom web applications and deploy data-driven websites. We’ve written a number of articles on Django, use the tags to find more.

Arion Sprague // July 5, 2013

Python’s Hidden New


__new__ is one of the most easily abused features in Python. It’s obscure, riddled with pitfalls, and almost every use case I’ve found for it has been better served by another of Python’s many tools. However, when you do need __new__, it’s incredibly powerful and invaluable to understand.

The predominant use case for __new__ is in metaclasses. Metaclasses are complex enough to merit their own article, so I don’t touch on them here. If you already understand metaclasses, great. If not, don’t worry; understanding how Python creates objects is valuable regardless.

Constructors

With the proliferation of class-based languages, constructors are likely the most popular method for instantiating objects.

Java

class StandardClass {
    private int x;

    public StandardClass() {
        this.x = 5;
    }

    public int getX() {
        return this.x;
    }
}

Python

class StandardClass(object):
    def __init__(self, x):
        self.x = x

Even JavaScript, a prototypal language, has object constructors via the new keyword.

function StandardClass(x) {
    this.x = x;
}

var standard = new StandardClass(5);
alert(standard.x == 5); 

Newer is Better

In Python, as well as many other languages, there are two steps to object instantiation:

The New Step

Before you can access an object, it must first be created. This is not the constructor. In the above examples, we use this or self to reference an object in the constructor; the object had already been created by then. The New Step creates the object before it is passed to the constructor. This generally involves allocating space in memory and/or whatever language specific actions newing-up an object requires.

The Constructor Step

Here, the newed-up object is passed to the constructor. In Python, this is when __init__ is called.

Python Object Creation

This is the normal way to instantiate a StandardClass object:

standard = StandardClass(5)
standard.x == 5 

StandardClass(5) is the normal instance creation syntax for Python. It performs the New Step followed by the Constructor Step for us. Python also allows us to deconstruct this process:

# New Step
newed_up_standard = object.__new__(StandardClass)
type(newed_up_standard) is StandardClass
hasattr(newed_up_standard, 'x') is False

# Constructor Step
StandardClass.__init__(newed_up_standard, 5)
newed_up_standard.x == 5 

object.__new__ is the default New Step for object instantiation. It’s what creates an instance from a class. This happens implicitly as the first part of StandardClass(5).

Notice, x is not set until after newed_up_standard is run through __init__. This is because object.__new__ doesn’t call __init__. They are disparate functions. If we wanted to perform checks on newed_up_standard or manipulate it before the constructor is run, we could. However, explicitly calling the New Step followed by the Constructor Step is neither clean nor scalable. Fortunately, there is an easy way.

Controlling New with __new__

Python allows us to override the New Step of any object via the __new__ magic method.

class NewedBaseCheck(object):
    def __new__(cls):
        obj = super(NewedBaseCheck, cls).__new__(cls)
        obj._from_base_class = type(obj) == NewedBaseCheck
        return obj

    def __init__(self):
        self.x = 5

newed = NewedBaseCheck()
newed.x == 5
newed._from_base_class is True 

__new__ takes a class instead of an instance as the first argument. Since it creates an instance, that makes sense. super(NewedBaseCheck, cls).__new__(cls) is very important. We don’t want to call object.__new__ directly; you’ll see why later.

Why is _from_base_class defined in __new__ instead of __init__? It’s metadata about object creation, which makes more semantic sense in __new__. However, if you really wanted to, you could define _from_base_class in __init__:

class StandardBaseCheck(object):
    def __init__(self):
        self.x = 5
        self._from_base_class = type(self) == StandardBaseCheck

standard_base_check = StandardBaseCheck()
standard_base_check.x == 5
standard_base_check._from_base_class is True 

There is a major behavioral difference between NewedBaseCheck and StandardBaseCheck in how they handle inheritance:

class SubNewedBaseCheck(NewedBaseCheck):
    def __init__(self):
        self.x = 9

subnewed = SubNewedBaseCheck()
subnewed.x == 9
subnewed._from_base_class is False

class SubStandardBaseCheck(StandardBaseCheck):
    def __init__(self):
        self.x = 9

substandard_base_check = SubStandardBaseCheck()
substandard_base_check.x == 9
hasattr(substandard_base_check, "_from_base_class") is False 

Because we failed to call super(...).__init__ in the constructors, _from_base_class is never set.

__new__ and __init__

Up until now, classes defining both __init__ and __new__ had no-argument constructors. Adding arguments has a few pitfalls to watch out for. We’ll modify NewedBaseCheck:

class NewedBaseCheck(object):
    def __new__(cls):
        obj = super(NewedBaseCheck, cls).__new__(cls)
        obj._from_base_class = type(obj) == NewedBaseCheck
        return obj

    def __init__(self, x):
        self.x = x

try:
    NewedBaseCheck(5)
except TypeError:
    print True 

Instantiating a new NewedBaseCheck throws a TypeError. NewedBaseCheck(5) first calls NewedBaseCheck.__new__(NewedBaseCheck, 5). Since __new__ takes only one argument, Python complains. Let’s fix this:

class NewedBaseCheck(object):
    def __new__(cls, x):
        obj = super(NewedBaseCheck, cls).__new__(cls)
        obj._from_base_class = type(obj) == NewedBaseCheck
        return obj

    def __init__(self, x):
        self.x = x

newed = NewedBaseCheck(5)
newed.x == 5 

There are still problems with subclassing:

class SubNewedBaseCheck(NewedBaseCheck):
    def __init__(self, x, y):
        self.x = x
        self.y = y

try:
    SubNewedBaseCheck(5, 6)
except TypeError:
    print True 

We get the same TypeError as above; __new__ takes cls and x, and we’re trying to pass in cls, x, and y. The generic fix is fairly simple:

class NewedBaseCheck(object):
    def __new__(cls, *args, **kwargs):
        obj = super(NewedBaseCheck, cls).__new__(cls)
        obj._from_base_class = type(obj) == NewedBaseCheck
        return obj

    def __init__(self, x):
        self.x = x

newed = NewedBaseCheck(5)
newed.x == 5

subnewed = SubNewedBaseCheck(5, 6)
subnewed.x == 5
subnewed.y == 6 

Unless you have a good reason otherwise, always define __new__ with *args and **kwargs.

The Real Power of __new__

__new__ is incredibly powerful (and dangerous) because you manually return an object. There are no limitations to the type of object you return.

class GimmeFive(object):
    def __new__(cls, *args, **kwargs):
        return 5

GimmeFive() == 5 

If __new__ doesn’t return an instance of the class it’s bound to (e.g. GimmeFive), it skips the Constructor Step entirely:

class GimmeFive(object):
    def __new__(cls, *args, **kwargs):
        return 5

    def __init__(self, x):
        self.x = x

five = GimmeFive()
five == 5
isinstance(five, int) is True
hasattr(five, "x") is False 

That makes sense: __init__ will throw an error if passed anything but an instance of GimmeFive, or a subclass, for self. Knowing all this, we can easily define Python’s object creation process:

def instantiate(cls, *args, **kwargs):
    obj = cls.__new__(cls, *args, **kwargs)
    if isinstance(obj, cls):
        cls.__init__(obj, *args, **kwargs)
    return obj

instantiate(GimmeFive) == 5
newed = instantiate(NewedBaseCheck, 5)
type(newed) == NewedBaseCheck
newed.x == 5 

Don’t Do This. Ever.

While experimenting for this post I created a monster that, like Dr. Frankenstein, I will share with the world. It is a great example of how horrifically __new__ can be abused. (Seriously, don’t ever do this.)

class A(object):
    def __new__(cls):
        return super(A, cls).__new__(B)
    def __init__(self):
        self.name = "A"

class B(object):
    def __new__(cls):
        return super(B, cls).__new__(A)
    def __init__(self):
        self.name = "B"

a = A()
b = B()
type(a) == B
type(b) == A
hasattr(a, "name") == False
hasattr(b, "name") == False 

The point of the above code snippet: please use __new__ responsibly; everyone you code with will thank you.

__new__ and the new step, in the right hands and for the right task, are powerful tools. Conceptually, they neatly tie together object creation. Practically, they are a blessing when you need them. They also have a dark side. Use them wisely.
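As a concrete example of a task where __new__ earns its keep, here is a minimal sketch of instance interning: constructing value-equal objects returns one shared instance, a pattern used for flyweights and cached immutable values. This is my own illustration, not code from the article; the class name and caching policy are assumptions.

```python
class InternedPoint(object):
    """Value-interned point: constructing the same coordinates twice
    returns the same object (illustrative sketch)."""
    _cache = {}

    def __new__(cls, *args, **kwargs):
        key = (cls, args, tuple(sorted(kwargs.items())))
        obj = cls._cache.get(key)
        if obj is None:
            # Defer to the default New Step to actually allocate.
            obj = super(InternedPoint, cls).__new__(cls)
            cls._cache[key] = obj
        return obj

    def __init__(self, x, y):
        # Note: __init__ still runs on every construction, even for a
        # cached instance -- keep it idempotent.
        self.x = x
        self.y = y

p1 = InternedPoint(1, 2)
p2 = InternedPoint(1, 2)
assert p1 is p2                        # same object, not merely equal
assert InternedPoint(3, 4) is not p1
```

Because __new__ returns an instance of the class, the Constructor Step still runs on every call, which is why the sketch keeps __init__ idempotent.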

Ross Lodge // December 13, 2012

Implementing WS-Security with CXF in a WSDL-First Web Service


Security is one of the most common requirements for SOAP-based web services. Several standards exist, among them WS-Security and WS-SecurityPolicy. They can be hard to implement, and they are often ignored in favor of a more ad hoc security standard, most often using password authentication in the message itself and SSL for transport layer security.

Trying to implement these standards recently, I had a very hard time finding a consistent and complete guide, or even a good explanation of the standards themselves. I did find good information on Glen Mazza’s Blog, and my implementation and this tutorial owe much to that information. But that tutorial is based on another one, which is in turn based on another one, and I found it difficult to filter through the layers to find what was necessary. Thus, I wrote this to provide a more complete and easy-to-use guide.

This tutorial will try to take you step-by-step through adding a security policy to an existing working web service WSDL as well as adding the additional CXF and Spring configuration necessary to make it work. It will not tell you how to build a CXF web service to start with, or how to configure Spring to make it work.

Tools Needed

  • Maven, 2.2.1 or better
  • JDK 1.6 or better

Code

You can find the code necessary for the tutorial here:

Technologies and Techniques Used

This tutorial uses Apache CXF to provide the backing for a JAX-WS web service which is built WSDL-First.

It uses CXF instead of the Glassfish jaxws-ri implementation or the embedded JDK implementation because I found getting jaxws-ri to do the same thing very cumbersome: it needed to reside in an endorsed standards directory (which puts an installation burden on any system administrators using the product); it requires annotations in the WSDL to work correctly; it requires different annotations for the client and server, so two WSDL versions need maintenance; and it failed with a fatal bug when SOAP faults were returned. CXF exhibited none of these problems, and was easy to integrate with Spring. That said, we generate the JAX-WS and JAXB code with Sun/Oracle’s standard tools to make sure they’re compliant.

The service is built WSDL-first because I believe that this is the most implementation-independent way of producing a SOAP-based web service, and because I think it gives you better interfaces by forcing you to think of them as services, rather than as java methods. It also allows us to clearly specify the security policy, which makes it easier for service consumers to comply.

This example also uses a multi-module Maven project which separates the WSDL, the generated JAX-WS code, and the service implementation/WAR into separate modules, which allows for easy re-use of the WSDL and/or the generated code.

The tutorial example also uses Spring, and the starting code consists of a complete working web service, packaged as a WAR, configured via Spring. Although various techniques are used to construct the configuration, I won’t be explaining the base Maven or Spring configuration in detail.

That said, there are some “tricks” in the code that might cause problems moving this example into an existing web service project:

  • The WSSecurityTutorialJaxWs project uses binding customizations to make the generated code more Java-friendly. These are like any other standard JAX-WS binding customizations, but you should note they exist.
  • The WSSecurityTutorialJaxWs unpacks the WSDL into a temporary directory for generation; it also unpacks the WSDL into the target/classes directory so that it ends up in the final WAR. This is because various tools, including CXF, can load the WSDL from the classpath rather than from the endpoint server, and so it is added to the jar as a convenience.
  • The WSSecurityTutorialWAR module is configured by various files through Spring, using an extension of Spring’s property placeholder functionality which will, if necessary, read properties from system property or JNDI env values. There are three tiers of property configuration files: a default one, a deployment one, and a test one. The intent is for the default one (in src/main/resources) to be rolled into the WAR, for the deployment one to be modified and deployed to the deployment server’s file system, and its location specified via a system property or JNDI value.
  • SLF4J is used for logging, and configuration files in the META-INF directories of the WAR and test classpaths force CXF to use SLF4J as well.
  • The WAR module also uses TestNG instead of JUnit, which allows us to “group” tests. A normal build will run the “unit” and “local-integration” groups. Adding the “integration-test” profile to the build (e.g., ‘mvn clean install -Pintegration-test’) executes the “remote-integration” group and uses a plugin to start Tomcat so that the service can be tested running in a container.

Getting Started

You can download the starting code here. If you unzip that, you should be able to cd, on the command line, into the WSSecurityTutorialParent module and execute “mvn clean install -Pintegration-test” successfully. If not, something is wrong with your environment, and you will have to diagnose it before you can continue.

Altering the WSDL

To begin, you have to decide what the service’s security policy will actually be, and modify the WSDL to specify it.

Aside from the specifications themselves, there seems to be precious little information available about the security policy standard (WS-SecurityPolicy). Some information can be found at http://wso2.org/library/3132.

Basically to declare a security policy for your web service, you have to define the policy using the http://schemas.xmlsoap.org/ws/2004/09/policy (wsp) and http://schemas.xmlsoap.org/ws/2005/07/securitypolicy (sp) schemas in your WSDL, and then attach the policy declarations to the service, operation, and/or input/output bindings that you want controlled by that policy.

A policy is declared with the “WS-Policy” schema/vocabulary (wsp, http://schemas.xmlsoap.org/ws/2004/09/policy), and basically looks like this:

WS-Policy Declaration

<wsp:Policy wsu:Id="UniqueIdentifier">
    <wsp:ExactlyOne>
        ...
    </wsp:ExactlyOne>
</wsp:Policy>

Inside the policy declaration, which in itself doesn’t define what the policy is, you need to add security policy declarations. These are defined by the sp schema (http://schemas.xmlsoap.org/ws/2005/07/securitypolicy), and there are a large number of variations, as defined in the specification linked above.

Basically, for our tutorial, we want to require that the body and custom headers of our messages are signed with an X.509 certificate (for source authentication), and that the body of our messages is encrypted with an X.509 certificate (for message privacy).

A policy to encrypt an input or output message is pretty simple, and looks basically like this:

WS-SecurityPolicy Input/Output Declaration

<wsp:Policy wsu:Id="InputOutputUniqueIdentifier">
    <wsp:ExactlyOne>
        <wsp:All>
            <sp:EncryptedParts>
                <sp:Body />
            </sp:EncryptedParts>
            <sp:SignedParts>
                <sp:Body />
                <sp:Header Namespace="http://example.com/tutotial/"/>
            </sp:SignedParts>
        </wsp:All>
    </wsp:ExactlyOne>
</wsp:Policy>

This says any operation whose input or output is linked to InputOutputUniqueIdentifier must have an encrypted body and must have a signed body and headers (the signed headers are all in the given namespace).

In theory we could require that the headers also be encrypted, but there is a CXF bug which prevents this from working (CXF-3452; also see related CXF-3453).

We then need to declare, for the entire service binding, how the input/output binding will take place (what kinds of tokens, how the tokens are exchanged, etc.). The options here are complex, and aside from the rather opaque specification, there’s not much explanatory documentation available.

WS-SecurityPolicy Binding Policy Declaration

<wsp:Policy wsu:Id="UniqueBindingPolicyIdentifier">
    <wsp:ExactlyOne>
        <wsp:All>
            <sp:AsymmetricBinding>
                <wsp:Policy>
                    <sp:InitiatorToken>
                        <wsp:Policy>
                            <sp:X509Token sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/AlwaysToRecipient">
                                <wsp:Policy>
                                    <sp:WssX509V3Token11 />
                                </wsp:Policy>
                            </sp:X509Token>
                        </wsp:Policy>
                    </sp:InitiatorToken>
                    <sp:RecipientToken>
                        <wsp:Policy>
                            <sp:X509Token sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/Never">
                                <wsp:Policy>
                                    <sp:WssX509V3Token11 />
                                    <sp:RequireIssuerSerialReference />
                                </wsp:Policy>
                            </sp:X509Token>
                        </wsp:Policy>
                    </sp:RecipientToken>
                    <sp:Layout>
                        <wsp:Policy>
                            <sp:Strict />
                        </wsp:Policy>
                    </sp:Layout>
                    <sp:IncludeTimestamp />
                    <sp:OnlySignEntireHeadersAndBody />
                    <sp:AlgorithmSuite>
                        <wsp:Policy>
                            <sp:Basic128 />
                        </wsp:Policy>
                    </sp:AlgorithmSuite>
                    <sp:EncryptSignature />
                </wsp:Policy>
            </sp:AsymmetricBinding>
            <sp:Wss11>
                <wsp:Policy>
                    <sp:MustSupportRefIssuerSerial />
                </wsp:Policy>
            </sp:Wss11>
        </wsp:All>
    </wsp:ExactlyOne>
</wsp:Policy>

This says an AsymmetricBinding will be used (asymmetric or public/private keys rather than symmetric encryption); the initiator must always include an X.509 token; the return message will also be signed/encrypted with an X.509 certificate, but the token itself will not be included and instead an issuer serial # reference will be included. Additionally, strict header layout is used; a timestamp is included and messages will be rejected if the timestamp is too far out-of-date (to avoid replay attacks); only complete headers and bodies must be signed rather than child elements of either; the “Basic128” algorithm suite is used; the signature itself must be encrypted; and the caller must support issuer serial references.

If we wanted to include a further layer of security for message transport, or wanted to use transport encryption instead of message-level encryption, we could add something like:

HTTPS Transport Policy Declaration

<sp:TransportToken>
    <wsp:Policy>
        <sp:HttpsToken />
    </wsp:Policy>
</sp:TransportToken>

So to implement these assertions, you should do the following:

Add to the attributes of your wsdl:definitions element:

  • xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
  • xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
  • xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"

I also added, for editor convenience:

Add the complete set of declarations to your WSDL (I added them as the last elements in the WSDL):

Complete Tutorial Binding Assertion

<wsp:Policy wsu:Id="TutorialBindingPolicy">
    <wsp:ExactlyOne>
        <wsp:All>
            <sp:AsymmetricBinding>
                <wsp:Policy>
                    <sp:InitiatorToken>
                        <wsp:Policy>
                            <sp:X509Token sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/AlwaysToRecipient">
                                <wsp:Policy>
                                    <sp:WssX509V3Token11 />
                                </wsp:Policy>
                            </sp:X509Token>
                        </wsp:Policy>
                    </sp:InitiatorToken>
                    <sp:RecipientToken>
                        <wsp:Policy>
                            <sp:X509Token sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/Never">
                                <wsp:Policy>
                                    <sp:WssX509V3Token11 />
                                    <sp:RequireIssuerSerialReference />
                                </wsp:Policy>
                            </sp:X509Token>
                        </wsp:Policy>
                    </sp:RecipientToken>
                    <sp:Layout>
                        <wsp:Policy>
                            <sp:Strict />
                        </wsp:Policy>
                    </sp:Layout>
                    <sp:IncludeTimestamp />
                    <sp:OnlySignEntireHeadersAndBody />
                    <sp:AlgorithmSuite>
                        <wsp:Policy>
                            <sp:Basic128 />
                        </wsp:Policy>
                    </sp:AlgorithmSuite>
                    <sp:EncryptSignature />
                </wsp:Policy>
            </sp:AsymmetricBinding>
            <sp:Wss11>
                <wsp:Policy>
                    <sp:MustSupportRefIssuerSerial />
                </wsp:Policy>
            </sp:Wss11>
        </wsp:All>
    </wsp:ExactlyOne>
</wsp:Policy>
<wsp:Policy wsu:Id="TutorialInputBindingPolicy">
    <wsp:ExactlyOne>
        <wsp:All>
            <sp:EncryptedParts>
                <sp:Body />
            </sp:EncryptedParts>
            <sp:SignedParts>
                <sp:Body />
                <sp:Header Namespace="http://example.com/tutotial/"/>
            </sp:SignedParts>
        </wsp:All>
    </wsp:ExactlyOne>
</wsp:Policy>
<wsp:Policy wsu:Id="TutorialOutputBindingPolicy">
    <wsp:ExactlyOne>
        <wsp:All>
            <sp:EncryptedParts>
                <sp:Body />
            </sp:EncryptedParts>
            <sp:SignedParts>
                <sp:Body />
                <sp:Header Namespace="http://example.com/tutotial/"/>
            </sp:SignedParts>
        </wsp:All>
    </wsp:ExactlyOne>
</wsp:Policy>

You then must “reference” the policy declarations where you want them used. To each wsdl:binding element where the binding policy should apply, add:

Binding Policy Reference

<wsp:PolicyReference URI="#TutorialBindingPolicy" /> 

For each input element where the policy should apply, add:

Input Policy Reference

<wsp:PolicyReference URI="#TutorialInputBindingPolicy"/> 

For each output element where the policy should apply, add:

Output Policy Reference

<wsp:PolicyReference URI="#TutorialOutputBindingPolicy"/> 

So, for instance, the tutorial’s code:

Complete Tutorial Binding

<wsdl:binding name="TutorialWebServiceSOAP" type="tns:TutorialWebService">
    <wsp:PolicyReference URI="#TutorialBindingPolicy" />
    <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http" />
    <wsdl:operation name="sendTutorialMessage">
        <soap:operation soapAction="http://example.com/tutotial/sendTutorialMessage" />
        <wsdl:input>
            <wsp:PolicyReference URI="#TutorialInputBindingPolicy"/>
            <soap:body use="literal" parts="parameters" />
            <soap:header use="literal" part="source" message="tns:TutorialRequest"/>
        </wsdl:input>
        <wsdl:output>
            <wsp:PolicyReference URI="#TutorialOutputBindingPolicy"/>
            <soap:body use="literal" parts="response"/>
            <soap:header use="literal" part="acknowledgment" message="tns:TutorialResponse"/>
        </wsdl:output>
    </wsdl:operation>
</wsdl:binding>

<wsdl:service name="TutorialWebService">
    <wsdl:port name="TutorialWebServiceSOAP" binding="tns:TutorialWebServiceSOAP">
        <soap:address location="http://localhost/" />
    </wsdl:port>
</wsdl:service>

Implementing the Binding

Now you need to get CXF to read, enforce, and support the binding on the server and client. In our example, the server is the end-result WAR of the WAR module, and the client example is the integration test cases in that module.

Dependencies

To do this, you will need to add additional CXF dependencies: one to support WS-Policy, one to support WS-Security, and one as an encryption provider.

In the tutorial example, the Parent module controls the versions, exclusions, etc. of all dependencies, so to the dependencyManagement element of the Parent POM, add:

New Dependency Management Entries

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-ws-security</artifactId>
    <version>${cxf.version}</version>
    <exclusions>
        <exclusion>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-ws-policy</artifactId>
    <version>${cxf.version}</version>
    <exclusions>
        <exclusion>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk16</artifactId>
    <version>${bouncycastle.version}</version>
    <exclusions>
        <exclusion>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>

And to the WAR module’s POM:

New WAR Dependency Entries

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-ws-security</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-ws-policy</artifactId>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk16</artifactId>
</dependency>

New Spring Configuration Files

These new dependencies allow CXF to process the policy declarations and the new headers. To activate them, you need to load the CXF Spring configuration files for those new CXF modules. So, to the WAR’s web.xml you should add, right under the existing classpath:META-INF/cxf/cxf-servlet.xml entry:

New Spring Files

classpath:META-INF/cxf/cxf-extension-policy.xml
classpath:META-INF/cxf/cxf-extension-ws-security.xml

And to your client, right after classpath*:/META-INF/cxf/cxf-extension-http.xml, you should add the same two XML files. For the tutorial, this is done in the @ContextConfiguration annotation of TutorialWebServiceTest.java.
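For reference, the test class’s context list might look roughly like the sketch below. Only the two CXF extension entries and their position are the point here; the other file names and the base class are illustrative assumptions, not taken verbatim from the tutorial code.

```java
// Sketch only: file names other than the two new CXF extension
// entries are hypothetical placeholders.
@ContextConfiguration(locations = {
    "classpath:META-INF/cxf/cxf.xml",
    "classpath*:/META-INF/cxf/cxf-extension-http.xml",
    "classpath:META-INF/cxf/cxf-extension-policy.xml",
    "classpath:META-INF/cxf/cxf-extension-ws-security.xml",
    "classpath:war-spring-test.xml"
})
public class TutorialWebServiceTest extends AbstractTestNGSpringContextTests
{
    // test methods...
}
```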

I found, when experimenting with this, that the CXF configuration files are sensitive to the order in which they are loaded by Spring – so the order specified above for the two new files, and where they are placed relative to existing CXF configurations, seems to be important.

Generate Certificates

Unless you have existing X.509 certificates for your client and server, you are going to have to generate new ones. Of course, for a production scenario, you should have issuer-signed certificates from a recognized authority such as Verisign, but for testing and development, and for this tutorial, self-signed certificates can be used. You can use the Java keytool for this; you will need to create two keystores (client and server), generate a client key and a server key, export the public keys, and import the public keys into the opposite number’s keystore. A script to do this is here:

generate-keys.sh

#!/bin/bash

# Set the values we'll use for the generation
read -p"Server Key Alias?" serverkeyalias
read -p"Server Key Password?" serverkeypassword
read -p"Server Keystore Password?" serverstorepassword
read -p"Server Keystore File Name?" serverkeystorename

read -p"Client Key Alias?" clientkeyalias
read -p"Client Key Password?" clientkeypassword
read -p"Client Keystore Password?" clientstorepassword
read -p"Client Keystore File Name?" clientkeystorename

# Generate the server and client keys
keytool -genkey -alias $serverkeyalias -keyalg RSA -sigalg SHA1withRSA -keypass $serverkeypassword -storepass $serverstorepassword -keystore $serverkeystorename -dname "cn=localhost"
keytool -genkey -alias $clientkeyalias -keyalg RSA -sigalg SHA1withRSA -keypass $clientkeypassword -storepass $clientstorepassword -keystore $clientkeystorename -dname "cn=clientuser"

# Export the client key and import it to the server keystore
keytool -export -rfc -keystore $clientkeystorename -storepass $clientstorepassword -alias $clientkeyalias -file $clientkeyalias.cer
keytool -import -trustcacerts -keystore $serverkeystorename -storepass $serverstorepassword -alias $clientkeyalias -file $clientkeyalias.cer -noprompt
rm $clientkeyalias.cer

# Export the server key and import it to the client keystore
keytool -export -rfc -keystore $serverkeystorename -storepass $serverstorepassword -alias $serverkeyalias -file $serverkeyalias.cer
keytool -import -trustcacerts -keystore $clientkeystorename -storepass $clientstorepassword -alias $serverkeyalias -file $serverkeyalias.cer -noprompt
rm $serverkeyalias.cer

Of course you should note or remember the necessary passwords; you will need them later.

These keystores need to be placed where the server or client can read them. For the tutorial, the client keystore goes into src/test/resources, and the server one goes into src/main/springconfig/local. You will later need to tell the client and server, via Spring properties, where these are.

Create a CallbackHandler

To get passwords for specific keys, CXF uses an implementation of javax.security.auth.callback.CallbackHandler. If you don’t already have one, you will need to create one. Create a new java class that implements javax.security.auth.callback.CallbackHandler that handles callbacks of type org.apache.ws.security.WSPasswordCallback. For example:

KeystorePasswordCallback.java

package com.example.tutorial.ws.security;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;

import org.apache.ws.security.WSPasswordCallback;

/**
 * Really simple callback for key passwords.  Configure it with a map
 * of key-alias-to-password mappings.  Obviously this could
 * be extended to encrypt or obfuscate these passwords if desired.
 */
public class KeystorePasswordCallback implements CallbackHandler
{
    private Map<String, String> passwords = new HashMap<String, String>();

    /**
     * {@inheritDoc}
     *
     * @see javax.security.auth.callback.CallbackHandler#handle(javax.security.auth.callback.Callback[])
     */
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException
    {
        for (Callback callback : callbacks)
        {
            if (callback instanceof WSPasswordCallback)
            {
                WSPasswordCallback pc = (WSPasswordCallback)callback;

                String pass = passwords.get(pc.getIdentifier());
                if (pass != null)
                {
                    pc.setPassword(pass);
                    return;
                }
            }
        }
    }

    /**
     * @return the passwords
     */
    public Map<String, String> getPasswords()
    {
        return passwords;
    }

    /**
     * @param passwords the passwords to set
     */
    public void setPasswords(Map<String, String> passwords)
    {
        this.passwords = passwords;
    }
}
Configure the Service

Next, you will need to configure the web service to handle WS-Security. Assuming you already have a CXF service defined in a Spring configuration file, you need to add:

  • The CallbackHandler you just created, with necessary passwords
  • A series of properties for the keystore to be used by the service
  • The key alias to be used for signing

To do this to the tutorial code, find cxf-service-config.xml, and add:

cxf-service-config.xml Additions

<bean id="keystorePasswordCallback" class="com.example.tutorial.ws.security.KeystorePasswordCallback">
    <property name="passwords">
        <map>
            <entry key="${wss.keyAlias}" value="${wss.keyPassword}"/>
        </map>
    </property>
</bean>

<util:properties id="keystoreProperties">
    <prop key="org.apache.ws.security.crypto.provider">org.apache.ws.security.components.crypto.Merlin</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.type">${wss.keystoreType}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.password">${wss.keystorePassword}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.alias">${wss.keyAlias}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.file">${wss.keystorePath}</prop>
</util:properties>

These define a password callback, with a key alias entry and password, and the properties to manage the keystore. Note that all of these entries are defined as Spring property tokens; you will define the values shortly.

And to the existing jaxws:endpoint/jaxws:properties in that file, add:

cxf-service-config.xml Additions

<entry key="ws-security.callback-handler" value-ref="keystorePasswordCallback"/>
<entry key="ws-security.encryption.properties" value-ref="keystoreProperties"/>
<entry key="ws-security.signature.properties" value-ref="keystoreProperties"/>
<entry key="ws-security.encryption.username" value="useReqSigCert"/>

The entry useReqSigCert tells CXF to “encrypt the response with the same certificate that signed the request”.

In this example we also use the same keystore properties for encryption and signature; if you have separate key and trust stores, you can create separate properties with different values.

Because most of the entries above are Spring property tokens, we need to enter the correct values into the property file that’s being used by Spring to store these values. For the tutorial, add to TutorialDeploymentPropertyPlaceholders.properties:

TutorialDeploymentPropertyPlaceholders.properties

wss.keyAlias=the key alias you generated the service key with
wss.keyPassword=the key password you generated the service key with
wss.keystoreType=jks
wss.keystorePassword=the key store password you generated the service key with
wss.keystorePath=${configDirectory}/the name you gave the service keystore
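As a concrete illustration, a filled-in file might look like the following. Every alias, password, and file name here is an invented example, not a value from the tutorial:

```properties
wss.keyAlias=serverkey
wss.keyPassword=serverkeypass
wss.keystoreType=jks
wss.keystorePassword=serverstorepass
wss.keystorePath=${configDirectory}/server-keystore.jks
```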

Configure the Client

The client configuration is essentially the same, with very minor changes. For the tutorial:

war-spring-test.xml Additions

<bean id="keystorePasswordCallback" class="com.example.tutorial.ws.security.KeystorePasswordCallback">
    <property name="passwords">
        <map>
            <entry key="${wss.keyAlias}" value="${wss.keyPassword}"/>
        </map>
    </property>
</bean>

<util:properties id="keystoreProperties">
    <prop key="org.apache.ws.security.crypto.provider">org.apache.ws.security.components.crypto.Merlin</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.type">${wss.keystoreType}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.password">${wss.keystorePassword}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.alias">${wss.keyAlias}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.file">${wss.keystorePath}</prop>
</util:properties>

...

<jaxws:client ...

    <jaxws:properties>
        <entry key="ws-security.callback-handler" value-ref="keystorePasswordCallback"/>
        <entry key="ws-security.encryption.properties" value-ref="keystoreProperties"/>
        <entry key="ws-security.signature.properties" value-ref="keystoreProperties"/>
        <entry key="ws-security.encryption.username" value="${wss.serverKeyAlias}"/>
    </jaxws:properties>
...

Note that this one uses a specific key alias for the “username”.

Then to the properties file:

TutorialTestPropertyPlaceholders.properties

wss.keyAlias=the alias you used to generate the client key
wss.keyPassword=the key password you used to generate the client key
wss.keystoreType=jks
wss.keystorePassword=the store password you used to generate the client key
wss.keystorePath=${configDirectory}/the name you gave to the client keystore
wss.serverKeyAlias=the server key alias you used to generate the server key

Run and Test

That should be it. You should now be able to run the service and test its encryption functionality. The tutorial code has logging interceptors turned on so you can see the encrypted and signed messages.
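The tutorial already has those interceptors configured; for reference, wiring them onto an endpoint looks roughly like this (the bean id, implementor reference, and address below are hypothetical, not the tutorial’s actual values):

```xml
<!-- Sketch: attach CXF's stock logging interceptors to an endpoint -->
<jaxws:endpoint id="tutorialWebService"
        implementor="#tutorialWebServiceImpl"
        address="/TutorialWebService">
    <jaxws:inInterceptors>
        <bean class="org.apache.cxf.interceptor.LoggingInInterceptor"/>
    </jaxws:inInterceptors>
    <jaxws:outInterceptors>
        <bean class="org.apache.cxf.interceptor.LoggingOutInterceptor"/>
    </jaxws:outInterceptors>
</jaxws:endpoint>
```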

Notes about the Encrypted Messages

Hopefully, if everything works, the exchanged messages should look much like this:

An Encrypted Message

<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xenc="http://www.w3.org/2001/04/xmlenc#">
    <soap:Header>
        <ns2:message-source
            xmlns="http://example.com/tutotial/types/"
            xmlns:ns2="http://example.com/tutotial/"
            xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
            message-identifier="SYSTEM FAILURE"
            system-identifier="test"
            wsu:Id="Id-1337947189"/>
        <wsse:Security
            xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
            soap:mustUnderstand="1">
            <wsse:BinarySecurityToken
                xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
                xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
                ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"
                wsu:Id="CertId-ACAFC43C228502A539130230131374311">MIIBoTCCAQqgAwIBAgIETSyeWTANBgkqhkiG9w0BAQUFADAVMRMwEQYDVQQDEwpjbGllbnR1c2VyMB4XDTExMDExMTE4MTU1M1oXDTExMDQxMTE4MTU1M1owFTETMBEGA1UEAxMKY2xpZW50dXNlcjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAgtoyaaP/nPzb7aW9VlRJTGDJENKMy87kewpN2z3TxMdzxsaFxQxtnnW+/iw+9kPAoEWQhFDIO7SG1VCEQrTfrefQ5b2fZZkeEKpMAc/Ls1BQxR7REUlBH7AhDNEu00tAhd0Rg7DUdIHhwI1phkEgasK13t7XxGMuzjb3MxdV5ZkCAwEAATANBgkqhkiG9w0BAQUFAAOBgQA+9VHcnZK2DAbkbNAdur/u6hPSGQz3s1l0ZK+WpKkRrSMh7P/eNZM8lDZnhJbjdyroU1u2X8DIgasQ+CCoHqSltwOpo75VrRCNbjBYATL+SEpU8zh37zO8jQVe4Bte6AAFQ1zFPpEAqgSVgxhNtXLPDrLVoos2svONEqd9wa4XuA==</wsse:BinarySecurityToken>
            <wsu:Timestamp
                xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                wsu:Id="Timestamp-9">
                <wsu:Created>2011-04-08T22:21:53.743Z</wsu:Created>
                <wsu:Expires>2011-04-08T22:26:53.743Z</wsu:Expires>
            </wsu:Timestamp>
            <xenc:EncryptedKey
                xmlns:xenc="http://www.w3.org/2001/04/xmlenc#"
                Id="EncKeyId-ACAFC43C228502A539130230131375015">
                <xenc:EncryptionMethod
                    Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p"/>
                <ds:KeyInfo
                    xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                    <wsse:SecurityTokenReference
                        xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
                        <ds:X509Data>
                            <ds:X509IssuerSerial>
                                <ds:X509IssuerName>CN=localhost</ds:X509IssuerName>
                                <ds:X509SerialNumber>1294769753</ds:X509SerialNumber>
                            </ds:X509IssuerSerial>
                        </ds:X509Data>
                    </wsse:SecurityTokenReference>
                </ds:KeyInfo>
                <xenc:CipherData>
                    <xenc:CipherValue>Kr5zeACNhaKl+INqWlI7moEbdSp1o8q7w0RUTTESxAnc9cKrjw1JPyM7VclXSIKOyqUQ81HbeypdiVUKNXMtpxUIcpGxQAtVeDec8nYZgNfBA6LRUh2xMe8QEf43UVKxS9MCvepS+J3tjhjSB4KJLR0mz15Ii0Gx/FJdjBt+RDM=</xenc:CipherValue>
                </xenc:CipherData>
                <xenc:ReferenceList>
                    <xenc:DataReference URI="#EncDataId-11"/>
                    <xenc:DataReference URI="#EncDataId-12"/>
                </xenc:ReferenceList>
            </xenc:EncryptedKey>
            <xenc:EncryptedData
                xmlns:xenc="http://www.w3.org/2001/04/xmlenc#"
                Id="EncDataId-12"
                Type="http://www.w3.org/2001/04/xmlenc#Element">
                <xenc:EncryptionMethod
                    Algorithm="http://www.w3.org/2001/04/xmlenc#aes128-cbc"/>
                <ds:KeyInfo
                    xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                    <wsse:SecurityTokenReference
                        xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
                        <wsse:Reference
                            xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
                            URI="#EncKeyId-ACAFC43C228502A539130230131375015"/>
                    </wsse:SecurityTokenReference>
                </ds:KeyInfo>
                <xenc:CipherData>
                    <xenc:CipherValue>glktSJ3F6cyD/b60F9mpkR0cBwttOxv7pRWxYfzqZS+8UcSnk52LhXpU1UwGiYD+53ULAdS0S4Av
09Fm8/bYTJd8gzuPoSXI1HZCEkEV7WoappMX+QDQRSf9Vusd4W5uGSiecN7twDx8l4uAS2Ipj592
vQWSD+Dpm9YkNZOhSj+vkg6Az2lVtf8Zl8SgEawuIztYzVlrfsTdC39KprD3lDPlzgZOM+t4tmTw
3fbggRvMrtKviIdDLJZau8G78PtmvtuD4pXfl/8d+foBuTk/pcofNJ4Pv/gfbnmI9UnMXu+du0nK
c9ZFRz9Y5dnEfMqCy7yYS3Yqm+7OhD+UzGa6IyoVD4E/TSFqK7SGvSVOS6qkNOQaIyxmUHYi52HP
Ab5cEnirab+rxZX9TWpfKI/TIrlAGWbbwJbk1SsAeRbLic7qgWCY23UQX5iwz3kfEOfi8NahRADZ
2s4DnOe/hYqz6ml3sL/KxA7nhfCAdDv9oheeMvU2b46MOo1Z0hIa8zvCU8NHnexFIp3wQJUqDWgA
mGHnVBiJuT8OUpdIeCA/hKd6ICAzG2isOsO9IWFb87aTD70xPTbI0lcaaB+5R9ZwuwVWxzr0T1K9
233bzcbUY9sIVXtzDwShgXxkQQbEDjJD8tTNfz20ZHq955Qe4hgAQXiphZ5gadpcv7PTQm9qzOvU
ZE6IVBol1ENFK0+VAqGOMS9lvMy68cTkrkc/usRDLZcKVMGpLj/1c5rQ0PrEBbwzxK7R3PHBhBRn
BcxLOgTexLaVgzcCGwrPUkbZW+RvDQuSl7SxzElbRyHsCKMQTKI6DtmbX72VwOsAhTCEO7WgkxVa
f1eGR+KSTrlD6nR1xEQ4KzT5ZvZ2RvCGQLO5TqOdFLPPd2tHMui4MQrdSoxpNJdm8FVKJrjG0CNb
REl/wx8sPWLvleBCh22DvCovldvYZfPR+y0sNRtTtniQ9TBK4IduoTD3Wg+USLek2KgyrHubeTkO
lnzakLOIFAv752NTfdbKt7y8uOl7cuozzWxAScQagXDfMEUjr8HvGiTESEu5OOXpHUhNOGMlIyfG
ZshsmpVPywWB9fVTkwkOME97B8lscSf2AC7gQ/Ts34tyX8IszF7sZFKFU20s/TYqBh2erwqEI9LN
CQ3ARK+Tak5CkVIjCy15ESoORsmBMApI5r/GO+dIwWKtEL4MLP7GypEJn39gGeQHFzOygx0U463t
4QkrUpOfju6UyHGAb4hzHw6EpsgntMz6hreReNoXbNCnFtaK0llN3An22wK8dAKTHI6mpK+u6AK/
x1ibN1s5LlGYTvLC8VY8XOCTT9WwV74N5MjdmEcNDk1PwrjtcMxE5Os8J0emp7S3/zD5JppLdyw8
SK+jzi1OqH575urtRw2ZSikV7jihEW/CJqSGIJYvWCJ8v8S/sJJOztmXQ/OIhLpl3Mbk6RKIVrHV
svWW+iOaGuKoXXyuBxdXeCZC9niopmmSlOs3r9ksRW8HHK6YzGTKDHOjVluihTL6lbmNlzvewIrb
aGbKBkhHIlgZfHtRVDD+HPM+uPugctZrdMoY56L4dBWG3Zv1LqzhSCVab78nfmcH9cKETcr8rY8i
hNPtKQz+lATV9kDoo5U6aHt/cTV2mhHqI1bQ2ouElJYPncZSoSTvZ/whMu+QMiz7wdflc3xRTzUk
O5zpXZkSfinCTfBu7EizVibf64rdhJIuYrW51lYgBb1gK66HvWe46MNQ8aBUtMAVwiUIO7ABOEwG
UkabDzyxmzV1EirrsYRUDq9E8aJY9FO+kRx4tHt4kd6as7KIEUWtSjs/oXDCyX0I1oXKRrFnxOWF
4Gj9zOXGNqD5opX6nqKkzB9dvlymGzvdHqlXz375EndCbAeRiI8JbBV6UKkhH2+NoJwTt5p9nPMn
Ac2txkNksDe8rvmn6fmkKXvguyb/Y+iJpdxplNdxBHFtFK2BozMvDyCFqylYaFvZVcp3c7E8v2eJ
B/HsICreV5UFdYpKxOadJO8OngE7xgoV8UdqLj5/o22a9QsiBa8fKXBYiw+po2Aw/W18Znutz6w2
f3elnihXxyZBgSJHfqSI964OB/ELp0r5kiQzr1WEvSgSYcAShcTCjVvmYwi+OZ1D5daPYMtj4BKN
BQPk5KKr4dQeSi+56DrZlCAVwldGoWIef0fbfLJvi2lZLLtFOhHpOGSIa6IuNw9czwzQvtsnBQEA
5a/WTodJ9w93i3tLTdBeEVG4mUkDyo0PR6zpfAbK46K7hoFUtMO0rpwYaW3bKrUvA35lQbNXP20z
u4ZmXNU48bZUijhUs3An+uQhQIKYdR+Mqt5AmCAfvVPDEM75tlC3OEvflsEu5u7F93uzk5Qej0Hi
2gnOp7YoUgqvkCHbmvrifhSrm8dTi77EDbH3k6YnjqrLKaanYC9o12F1KAYopUBBoyCrqqQJAPm7
mYuxniAqeuKgXZO2et3xin/Klg0PKtxI6tEC0r4OFGU+woVN/B0wM7n/XFMTu25KZzjaXX1LdBA/
3Q4riSuRuzOHD6kpZUC+k/i5T06EqhLUWyW3hQ6t/9hOVczUuBNR42lsGdGtRGMOey277Gymffg0
VwGfFih6UxSJxAVFWBiMntCxDQHWQ2AM1SGd+RvoMJtn1UzcxcKUwRbFJDWrWieUOJV0/i6E0rPe
Bug22/ylmB4jcNLS7u3ble0nsLHm4jvbWreDmWujV9vGEWArn9BBEDTaJLWFEwg6qOw6vVP8RIzc
VdTKcYaNG4PnAbay</xenc:CipherValue>
                </xenc:CipherData>
            </xenc:EncryptedData>
        </wsse:Security>
    </soap:Header>
    <soap:Body
        xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
        wsu:Id="Id-1857134841">
        <xenc:EncryptedData
            xmlns:xenc="http://www.w3.org/2001/04/xmlenc#"
            Id="EncDataId-11"
            Type="http://www.w3.org/2001/04/xmlenc#Content">
            <xenc:EncryptionMethod
                Algorithm="http://www.w3.org/2001/04/xmlenc#aes128-cbc"/>
            <
ds:KeyInfo
                xmlns
:ds="http://www.w3.org/2000/09/xmldsig#">
                <
wsse:SecurityTokenReference
                    xmlns
:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
                    <
wsse:Reference
                        xmlns
:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
                        
URI="#EncKeyId-ACAFC43C228502A539130230131375015"/>
                </
wsse:SecurityTokenReference>
            </
ds:KeyInfo>
            <
xenc:CipherData>
                <
xenc:CipherValue>1QzMDBtcLlhUOwAeI++PCtUZS+fRwiVly4kOBl+7pNcsFhCudYaySmaKnb1v1Qyh0kmdPxMBjMmD
ZlKTSQESwyDYxlBzu
+UxQJi/ovJ85m+k/MJHNvvsuXMwS7VPPGERvnUDr+tfngWrnhKMdx3hE25t
TIsSkCzd89
/OPfc9xtrtKxMse1JrJhHaDB62xo4JRD7/dSsS2Wh4vQHhlNipm/w/Yf89vSp35kne
DnqdqmJf49jfefiRgo4HfzAv5MAZpgZ25ngzJK7bnuf5oCEUdauWEgvIviJnU1UT6wApvyYeldby
aeacc3YrfOFxtON2dyXnoPAXRRAyNUabDeuagNGVjZoyKfhmmTiPpjXVNrtXePppkCEXq46yKH9i
hmW728l95VBGaUbS3J
/405+ywrr4H5pl6ypMMAY5L/dnk449BpQ3XU/W6JH7UgkmhMwSZMwdt0du
HvNcS
/UpW/gaOREynPTV8EcEKscPty3LM8c00uW59sVIYaYEnED1K68zFmejcHcQj2RwtuSd/6dL
Ao5u7JtM1OmcQ6LT5YddMHQPnThZTGxVQWimGPoU
+089UXUavcIX5nMY/PUY1ISuzVlvRFw+aEwC
Us7Iq05a9F8ZsbQVq7I20qPZGSouDIbPn5rpHEmQf56wB2k1bS
/RqrTqaOXUADlhwWSWfpizT07F
k0QMlOWpyxqJ1q8mnxjllFqVjuu2QulLgyI
+ee3cKzh8Wi0tVwh4wqX+XIXLV0Q+IJ1gs6j7lTAU
nXzaqNgYbVG2cIk70V
+OyKeXh2Og9Z8gBpB09hULj3SRdIeYpuWrsvWdrDbunKl00OBPsbaZSbcg
pK4
/PfNN5II8hVEvF6Fn6V0DkGP9i4c0lT0H52E=</xenc:CipherValue>
            </
xenc:CipherData>
        </
xenc:EncryptedData>
    </
soap:Body>
</
soap:Envelope

The most notable changes between this and a “normal” SOAP message are the wsse:Security header and the blocks of xenc:CipherData. Several things here are worth noting, because nothing I had read explained how they worked:

  • The wsse:Security element in the header contains the information needed to decrypt and verify the message.
  • The wsse:BinarySecurityToken element contains the actual token data.
  • The wsu:Timestamp element contains our requested timestamp, in this case expiring in 5 minutes. Messages sent after the expiration date should fail.
  • The xenc:EncryptedKey element contains information about the key that was actually used to encrypt the message. It contains the token reference for the encrypting key, which in the message above is the public key of the server. It also contains a xenc:CipherValue element which, as I understand it, is a randomly generated 128-bit symmetric key, encrypted with the public key of the server. Only this single-use symmetric key is used to encrypt the message body. This provides the speed of symmetric encryption coupled with the security of public-key encryption: random, single-use keys are exchanged encrypted with the public key of the recipient.
  • The xenc:EncryptedKey element also contains a reference list of the elements that the key should be used to decrypt.
  • The first xenc:EncryptedData element contains the signature for the message, encrypted with the given symmetric key.
  • The second xenc:EncryptedData contains the body of the message, encrypted with the given symmetric key.
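The key-wrapping pattern described in these bullets can be sketched in plain JCE code. This is an illustration of the pattern, not of what the WS-Security stack does internally; the class and variable names are mine, and the sketch uses the JDK's default RSA and AES transforms rather than the exact aes128-cbc transform from the message above:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class HybridEncryptionSketch {

    /** Round-trips a payload through the wrap-then-encrypt pattern. */
    public static boolean roundTrip() throws Exception {
        // The recipient's key pair (the server certificate's key, in WS-Security terms).
        KeyPair serverKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // 1. Generate a random, single-use 128-bit symmetric key.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey sessionKey = keyGen.generateKey();

        // 2. Encrypt ("wrap") the session key with the server's public key;
        //    this is what travels in xenc:EncryptedKey.
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.WRAP_MODE, serverKeys.getPublic());
        byte[] wrappedKey = rsa.wrap(sessionKey);

        // 3. Encrypt the actual payload with the fast symmetric key;
        //    this is what travels in the xenc:EncryptedData blocks.
        byte[] payload = "<soap:Body>hello</soap:Body>".getBytes(StandardCharsets.UTF_8);
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, sessionKey);
        byte[] cipherText = aes.doFinal(payload);

        // The server reverses the process: unwrap with its private key, then decrypt.
        Cipher unwrap = Cipher.getInstance("RSA");
        unwrap.init(Cipher.UNWRAP_MODE, serverKeys.getPrivate());
        SecretKey recovered = (SecretKey) unwrap.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);

        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, recovered);
        return Arrays.equals(payload, dec.doFinal(cipherText));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```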

Completed Source

You can download the completed tutorial source here.

Ross Lodge // March 31, 2011

Emulating JRE Classes In GWT


The Google Web Toolkit (GWT) SDK provides a set of core Java APIs and Widgets - speeding the development of powerful AJAX applications in Java that can then be compiled to highly optimized JavaScript that runs across all browsers, including mobile browsers for Android and iOS.

However, when working with GWT, you quickly find that the toolkit’s implementation of the Java APIs is incomplete, and that using types Google hasn’t provided as translatable will result in a GWT compiler error.

We wanted to use the java.net.URI and java.util.UUID classes in our client-side code, neither of which is supported by GWT. Fortunately, GWT provides support for overriding one package implementation with another, and this tutorial describes a method for implementing client-side versions of JDK classes that GWT doesn’t support.

There have been some attempts to implement more of the JDK (see GWTx, for example), but they are quite incomplete.

It is extremely useful, then, to have a clear technique for creating client-side versions of some JDK classes that are unavailable. GWT helpfully provides a mechanism for doing this (look down under “Overriding one package implementation with another”), but doesn’t tell you much about how to use it.

I went looking for examples of how to use the “super-source” XML tag to create client-side implementations of some Java types, and found only a couple of incomplete or confusing tutorials.

But none of these told me exactly how to accomplish this.

In addition, we had another problem: we needed to pass the URI and UUID classes back and forth between the client JavaScript and GWT-implemented services on the server, and none of the above blog posts gave a hint as to how to make the Java and JavaScript versions of the classes mutually serializable (GWT uses the standard JDK implementation on the server, but the JavaScript override classes on the client).

There’s a mechanism for this, as well, but again I couldn’t find good Google documentation on it. I did find How to use a CustomFieldSerializer in GWT, but it deals with entirely custom classes and not classes meant to act as JDK API classes.

We eventually figured this out and I thought I’d put this tutorial together in hopes that others can do the same if necessary.

Tools Needed

  • Maven 2.2.1 or later
  • JDK 1.6 or later
  • Eclipse (optional)

These can be downloaded from the websites of the respective projects.

Project Setup

If you don’t have an existing project, you can use the gwt-maven-plugin archetype to create one:
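A sketch of the invocation, assuming the org.codehaus.mojo gwt-maven-plugin archetype coordinates of that era (adjust the version to match the plugin you are using):

```shell
mvn archetype:generate \
  -DarchetypeGroupId=org.codehaus.mojo \
  -DarchetypeArtifactId=gwt-maven-plugin \
  -DarchetypeVersion=2.2.0
```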

It will prompt you for group id, artifact id, version, and package names; for the example I used “com.example.gwt”, “SuperSource”, “1.0.0-SNAPSHOT”, and “com.example.gwt”.

You will need a working maven pom.xml file that includes the necessary GWT dependencies and the gwt-maven-plugin; the archetype may give you this, although it doesn’t use the latest GWT versions and in my experience produces some errors. The one I used, for example, looks like this:
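A trimmed sketch of such a pom.xml, using the group and artifact ids from this example; the GWT version, plugin version, and plugin configuration shown are illustrative:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.gwt</groupId>
  <artifactId>SuperSource</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>war</packaging>

  <dependencies>
    <dependency>
      <groupId>com.google.gwt</groupId>
      <artifactId>gwt-servlet</artifactId>
      <version>2.2.0</version>
      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>com.google.gwt</groupId>
      <artifactId>gwt-user</artifactId>
      <version>2.2.0</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>gwt-maven-plugin</artifactId>
        <version>2.2.0</version>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>generateAsync</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <runTarget>SuperSource.html</runTarget>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
```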

Create source directories: src/main/java, src/main/resources, and src/main/webapp. You will need to provide the necessary html, css, and web.xml files for your implementation. The example code linked above includes sample sources for these taken directly from the archetype that the gwt-maven-plugin provides.

Create server-side service interface and implementation, and an EntryPoint class. The ones in the linked source are adapted from the “Greeting Service” that comes with the archetype.

Create a JDK-Emulation Module

Create a new module at the same level, in the resources directory, as your existing GWT module. In the example source, I called it “SuperSourceJre.gwt.xml”, and placed it in the src/main/resources/com/example/gwt directory next to the one created by the archetype. It should look like this:
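A sketch of the module file; the "jre" path and the rename-to value are choices made for this example:

```xml
<module rename-to="supersourcejre">
    <inherits name="com.google.gwt.user.User"/>
    <!-- Everything under com/example/gwt/jre/ is compiled with that prefix
         stripped, so jre/java/net/URI.java becomes java.net.URI. -->
    <super-source path="jre"/>
</module>
```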

The path in the super-source tag is arbitrary, but it must match the name of a directory directly under the package or directory where you created the new file; you can also set the “rename-to” value to whatever you would like. GWT takes the path specified in the super-source tag and removes it from the front of the package path when compiling anything under that directory. So files under “com/example/gwt/jre/java/net” will be compiled as if they belonged to the package “java.net”. This allows you to create classes that get “renamed” to classes in the core JDK. Note that the sub-path must then match the package of the class in the JDK.

Import this new module in your existing module(s). For example, in the existing “SuperSourceTest.gwt.xml” file, I added:
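Assuming the module name used in this example, the added line is just an inherits entry:

```xml
<inherits name="com.example.gwt.SuperSourceJre"/>
```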

Create the Emulated Classes

In the attached example source code, I emulate both java.net.URI and java.util.UUID, and I also include some fancy JSNI native code to allow me to generate UUIDs on the client side if necessary. Of course, your needs might be different. The key is that only the methods you implement in your emulation classes are available to the client. Your Java code will appear to compile while you are writing client-side code, but any use of a method you haven’t emulated will fail GWT compilation.

The classes you need to emulate should be created in a package-dependent path in the resources directory under where you created your new emulation module. For example, I created the classes:

  • src/main/resources/com/example/gwt/jre/java/net/URI.java
  • src/main/resources/com/example/gwt/jre/java/net/URISyntaxException.java
  • src/main/resources/com/example/gwt/jre/java/util/UUID.java

(The exception is created because the URI constructor throws URISyntaxException, so it also must be emulated.) Note that I put these under src/main/resources instead of src/main/java: Eclipse (and other IDEs and compilers) would refuse to compile them as Java, since their package declarations don’t match their locations. Treating them as “resources” means they end up in the source and classes directories, but nothing except GWT attempts to compile them.

Create the source code for these classes. Ideally, you would implement the entire functionality of the original JDK class yourself, but you at least need to emulate a default constructor and the minimum logic necessary for your client. The signatures of the methods you implement should mirror exactly the ones they are replacing in the JDK itself.

Here are the implementations I used:
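As a trimmed sketch, here is the shape of the client-side UUID emulation (the URI and URISyntaxException files follow the same pattern and are omitted here). Only the methods the client needs are emulated, and the JSNI randomUUID shown uses plain JavaScript random numbers, a simplified, non-cryptographic version-4 generator:

```java
package java.util; // super-source: GWT compiles this as java.util.UUID

public final class UUID implements Comparable<UUID> {

    private final String value;

    private UUID(String value) {
        this.value = value;
    }

    public static UUID fromString(String name) {
        return new UUID(name);
    }

    /** Generate a (simplified) random UUID on the client via JSNI. */
    public static native UUID randomUUID() /*-{
        var chars = '0123456789abcdef'.split(''), uuid = [], i;
        for (i = 0; i < 36; i++) {
            uuid[i] = chars[Math.floor(Math.random() * 16)];
        }
        uuid[8] = uuid[13] = uuid[18] = uuid[23] = '-';
        uuid[14] = '4';
        return @java.util.UUID::fromString(Ljava/lang/String;)(uuid.join(''));
    }-*/;

    @Override
    public String toString() { return value; }

    @Override
    public boolean equals(Object o) {
        return o instanceof UUID && value.equals(((UUID) o).value);
    }

    @Override
    public int hashCode() { return value.hashCode(); }

    public int compareTo(UUID other) { return value.compareTo(other.value); }
}
```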

At this point, GWT will happily compile these classes, but they can only be used on the client side (server-side code will use the original JDK classes). They also can’t be passed as service arguments between client and server, because their serialization signatures differ.

Implementing Serialization

GWT allows you to specify custom serialization for your classes. The method is pretty straightforward: you need to create a class in the same package as the class you want to serialize named [Class to Serialize]_CustomFieldSerializer. The name and package must be exact, or GWT won’t be able to find the serialization class.

In the serializer class, you must implement two methods:

  • serialize
  • deserialize

You may also implement “instantiate” if default constructor instantiation is not adequate.

For example:
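The general shape, using GWT's streaming API, looks like this (MyType is a stand-in for the class being serialized, so this skeleton won't compile on its own):

```java
import com.google.gwt.user.client.rpc.SerializationException;
import com.google.gwt.user.client.rpc.SerializationStreamReader;
import com.google.gwt.user.client.rpc.SerializationStreamWriter;

public final class MyType_CustomFieldSerializer {

    public static void serialize(SerializationStreamWriter writer,
            MyType instance) throws SerializationException {
        writer.writeString(instance.toString());
    }

    public static void deserialize(SerializationStreamReader reader,
            MyType instance) throws SerializationException {
        // Nothing to do: all state is restored in instantiate().
    }

    public static MyType instantiate(SerializationStreamReader reader)
            throws SerializationException {
        return new MyType(reader.readString());
    }
}
```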

Because we want these serializers available on both the server and client side, they should be placed in src/main/java/… so they will be compiled by both GWT and the java compiler.

This is pretty straightforward, and works fine on the client. But there’s a major problem: the classes we want to emulate are in JDK core packages (java.net, java.util). On the server, the classloader will refuse to load any custom class in core packages (packages beginning with java or javax) as part of the JDK’s core sandboxing. So we can’t place our serializers in the right package.

Apparently this is a problem Google had as well, because their own emulation classes often need custom serializers too. So they created a “magic” package prefix that their internal API checks for serializers, in addition to the raw package name. This value is “com.google.gwt.user.client.rpc.core”, and the raw package is appended to it. If GWT is looking for a serializer for “java.net.URI”, it first checks for “java.net.URI_CustomFieldSerializer” and, if none is found, checks for “com.google.gwt.user.client.rpc.core.java.net.URI_CustomFieldSerializer”. So if we place our serializers in such a package, GWT will find them automatically. Of course, this is internal GWT API to “use at your own risk”, but we haven’t found another way around this problem yet.

So to serialize our own URI and UUID classes, we build custom serializers as (note the src/main/java location and the special package):

  • src/main/java/com/google/gwt/user/client/rpc/core/java/net/URI_CustomFieldSerializer.java
  • src/main/java/com/google/gwt/user/client/rpc/core/java/util/UUID_CustomFieldSerializer.java

These look like:
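A sketch of the URI serializer, in the special package (the UUID one is identical except that it calls UUID.fromString and throws nothing):

```java
package com.google.gwt.user.client.rpc.core.java.net;

import java.net.URI;
import java.net.URISyntaxException;

import com.google.gwt.user.client.rpc.SerializationException;
import com.google.gwt.user.client.rpc.SerializationStreamReader;
import com.google.gwt.user.client.rpc.SerializationStreamWriter;

public final class URI_CustomFieldSerializer {

    public static void serialize(SerializationStreamWriter writer, URI instance)
            throws SerializationException {
        writer.writeString(instance.toString());
    }

    public static void deserialize(SerializationStreamReader reader, URI instance) {
        // State is restored entirely in instantiate().
    }

    public static URI instantiate(SerializationStreamReader reader)
            throws SerializationException {
        try {
            return new URI(reader.readString());
        } catch (URISyntaxException e) {
            throw new SerializationException(e);
        }
    }
}
```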

The code implementation for both of these is pretty straightforward: serialize the object using toString, and instantiate it from the string.

Utilizing the Classes

Obviously you will have your own specific uses of these classes. To test them, I created a simple DTO bean in the archetype’s shared package:
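A minimal sketch of such a bean; the class name EchoBean is my placeholder, and the package declaration for the archetype's shared package is omitted so the snippet stands alone:

```java
import java.io.Serializable;
import java.net.URI;
import java.util.UUID;

/** Carries the emulated types across the GWT-RPC wire in both directions. */
public class EchoBean implements Serializable {

    private static final long serialVersionUID = 1L;

    private URI uri;
    private UUID uuid;

    /** Default constructor required by GWT-RPC. */
    public EchoBean() {
    }

    public URI getUri() { return uri; }
    public void setUri(URI uri) { this.uri = uri; }

    public UUID getUuid() { return uuid; }
    public void setUuid(UUID uuid) { this.uuid = uuid; }
}
```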

I then modified the archetype’s GreetingService and related implementation, changing greetServer to look like this:
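A sketch of the modified interface, assuming the hypothetical EchoBean DTO above:

```java
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

@RemoteServiceRelativePath("greet")
public interface GreetingService extends RemoteService {

    /** Echo the incoming URI/UUID values and add some server-generated ones. */
    EchoBean greetServer(EchoBean input) throws IllegalArgumentException;
}
```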

The implementation was changed similarly; it both echoes back the incoming URI and UUID entries and generates some new random ones. This lets me make sure that the equals and hashCode implementations work reasonably well.

Then I modified the EntryPoint to use the new method and display the result. If you’re using gwt-maven-plugin to generate the async interfaces you’ll have to do a maven compile to have those interfaces generated before the code below will compile in Eclipse.
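A sketch of the relevant part of the EntryPoint; the class and DTO names are illustrative, and GreetingServiceAsync is the interface generated by the plugin's generateAsync goal:

```java
package com.example.gwt.client;

import java.net.URI;
import java.util.UUID;

import com.example.gwt.shared.EchoBean;
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.rpc.AsyncCallback;

public class SuperSourceEntryPoint implements EntryPoint {

    private final GreetingServiceAsync greetingService =
            GWT.create(GreetingService.class);

    public void onModuleLoad() {
        EchoBean bean = new EchoBean();
        bean.setUri(URI.create("http://www.example.com/"));
        bean.setUuid(UUID.randomUUID()); // uses the JSNI emulation on the client

        greetingService.greetServer(bean, new AsyncCallback<EchoBean>() {
            public void onFailure(Throwable caught) {
                Window.alert("RPC failed: " + caught.getMessage());
            }

            public void onSuccess(EchoBean result) {
                Window.alert("Echoed: " + result.getUri() + " / " + result.getUuid());
            }
        });
    }
}
```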

You should then compile this using mvn clean install gwt:run to make sure it works correctly in both hosted and production modes.
