I have another public blog post out in the wild west of the internet – check out the Packt blog here. It looks like a couple of typos slipped through the net though 😦
The request to present came late, as we were needed to cover someone who had to cancel (not that we aren’t grateful for the opportunity). This did mean getting the presentation together was a bit of a scramble; unfortunately I missed a couple of sessions as I needed to assemble an environment and work out how I wanted to explain the points Luis’ slides were communicating, this being the first time presenting with Luis as a double act. Add to that addressing the day-to-day work demands.
Despite these challenges, I think the presentation went very smoothly (and we’re looking forward to receiving the feedback). The slides can be found here …
I did catch a few presentations, including the keynote by Adam Bien, Tim Hall‘s presentation on exposing databases using REST services, Lucas Jellema‘s microservices and eventing backbone, and finally CQRS by Sebastian Daschner. All the presentations were top notch, loaded with useful information. I’ve been fortunate to see both Lucas and Tim presenting before, so knew I would be in for really good presentations. So if you ever want to know about Oracle DB matters with practical, honest insights, I’d recommend looking Tim up. Likewise in the middleware space for Lucas.
Seeing the different presenting styles was interesting – those presenters with a Java Rockstar background versus those from an Oracle Ace background. The Java guys take a very minimalist approach, with few (if any) slides and all code/demo – but blink and you’ll miss it – whereas the Ace community (of which I am fortunate enough to be a member) uses slides that are often visually very strong, still supported by demos.
Whilst I’ve attended Oracle Open World, I’ve not yet seen the parallel Java One conference in San Francisco. That said, the feel of the day’s event (and presumably the goal) is what I’d expect Java One to be like. I have in the past attended similar RedHat events, and whilst the venue had a similar feel (not surprising as both have used SkillsMatter venues), the difference was that the atmosphere felt a lot friendlier and more communal at Oracle Code. This may be down in part to the fact that I know more of the people, both Aces and Oracle employees, although that can’t be the only reason, as when I was involved in the RedHat environment I knew senior people within the organisation and had encountered the presenters.
My last observation, a more technical one, is that JavaEE was mentioned a lot more than I’d expected – even those much-maligned EJBs got a mention. Is JavaEE making a resurgence?
So, if you get a chance to attend Oracle Code as an architect or developer, I’d recommend you take the opportunity. Whilst Devoxx may be bigger with the really big name speakers, the day was informative, engaging and rewarding.
We have just supplied our publisher with the final draft of the final chapter in our book about Oracle Integration Cloud Service (ICS). Before we get too chilled out waiting to see the printed article as Packt sorts out the final publishing, I thought it might be helpful to share some observations from our experiences.
Let’s start with some background. I have been acting as a peer reviewer for Packt for some years now, and in fact Packt had approached me in the past to write a book; however, I had declined their proposals as I didn’t want to write on a subject that people had already written about. So when I was introduced to ICS, it felt like a good subject for a book – it certainly represents something that is going to have a significant future, and it deserved a book to help people get beyond a basic user guide.
Choosing to write a book is not a small undertaking, so make sure you’re going to do it for the right reasons. Let’s be honest, very few books make much money. You have to be lucky, writing on a subject you know is going to be game changing (think Gang of Four and Thomas Erl), or have a definitive text on the next big thing that everyone will use. Publishers also run promotions, discounts and giveaways, some more than others, but that will all eat into your share; not to mention that unless you self-publish or you’re a rock star author, you will not see big percentage royalty rates coming your way.
So the first step for us was to get a publisher on board. Given it was an Oracle product, I wanted to talk to Oracle Press (or here) first, which is run by McGraw-Hill. They weren’t too sure about the idea, having not been successful with previous cloud books. So we went back to Packt – they do have Oracle-based books, and I had a relationship there.
With some initial positive feedback, I needed to get things moving. Thinking it through, I concluded that writing the entire book alone could be a lot of very hard graft when working with a new product, and working for a customer organisation I didn’t have access to the same level of resources as you can get within an Oracle partner company. So I needed a co-author who was involved with ICS, and ideally working for an Oracle partner. I had seen Robert van Mölken blogging about ICS, and working for AMIS suggested he would be a very capable person, not to mention that AMIS is a respected partner. Robert has shown himself to be more than capable, and getting him signed up to the idea was a good call.
Next was to start properly developing the idea, which means chapters, subheadings, and a book introduction. Very quickly the chapters and subsections were finalised, along with our approach to the examples. I was very keen that the examples were rooted in plausible scenarios that would help convey the ideas without getting caught up explaining the detail; not to mention the examples should feel less superficial. Additionally, we recognised that a book about a cloud solution means things will move far faster than something deployed on-premises, so our approach needed to hold true and relevant for a good while, even as new features and aesthetic changes arrive.
We divided the work up between us; in hindsight I think Robert took on the more troublesome chapters, in so far as they needed an understanding of more social APIs. So when plotting out the division of work, also think about the technical challenges you might have to face and explain. Whilst you won’t hit this at a ‘hello world’ level of functionality, once you get past that the effort builds up; if you’re working on a cookbook it may well be an important factor.
Our original goal had been to publish in time for Open World. But the realities of a day job, plus both of us being active with events such as user groups, meant these things would eat into the available writing time, as demos and presentations also needed to be written. We also uncovered a couple of bugs that delayed things – both in waiting for the patch, and in confirming that what we were seeing was a bug and not an issue of understanding.
In hindsight I think perhaps we should have done more work during the planning to build the example scenarios. There is no doubt that planning before actually writing makes a significant difference, and it would have given us more time to work through the questions and challenges. The risk would have been that it would have taken a lot longer before we actually produced some content, and there is certainly something psychological about getting those initial chapters written.
During the core writing phase Robert and I would have a weekly call to catch up; it meant we could discuss the chapter scenarios, details, and assumptions, and stay aligned. Whilst not strictly necessary – this could have been done by email – a short conversation was a lot easier and helped keep focus, not to mention it probably reduced the differences in writing style that can occur with different authors.
When it comes to the writing itself, I found that the clearer my thinking was on the specific points I wanted to convey, the easier the writing became and the chapter just flowed. The question I still haven’t really answered in my own mind is whether I should have been a lot more attentive to the formatting the publishers wanted us to adopt; applying it retrospectively took a couple of passes, as you would keep spotting something that had missed the correct style. But diligently applying the right styles as I went would have been disruptive to the flow of writing.
We found that most chapters overran the page count by about 10%-15%. The publisher was pretty cool about this – they agreed that a good book mattered more than a book edited down to a specific length. We can put the overrun down largely to the fact that we didn’t allow for the formatting of the page, which meant more white space than we had anticipated, plus in the drafts we needed to include additional publication notes, such as references to the images being used. It is worth looking at this before finalising your chapter lengths.
The last thing we did during the writing of the first drafts was to review each other’s work before submitting the chapters. This probably helped a lot, in so far as Robert would often pick up on issues with my screenshots, and I would tend to finesse wording – when you write in a more conversational style, those little quirks of speaking can come through.
Completing all the chapters in first draft felt pretty satisfying, and was certainly a morale boost even though we had overrun our original estimates, as it meant we were well over 50% complete – probably nearer to 75% in terms of effort. In the contract with Packt this was also the first milestone for the advance, which is a long way into the process, and the payment has yet to be received. Some of this delay has been organisational, but things don’t happen quickly on that front.
Before we started the project, one of the Oracle Ace Directors we knew offered some observations, suggesting that each page would probably take a couple of hours to write. I have to admit to being a little sceptical of this, as it would mean roughly a year of writing every evening for both of us – though looked at in elapsed time it isn’t far from the truth. Looking at actual effort, in those weeks where I was just working on the book rather than presentations or work demands, I think it would be fair to say about 8-12 hours of effort went in each week, for a book that is about 450 pages in length. In the end I think we were probably writing at twice that predicted speed, if you measure effort from first to last draft.
The second draft is about addressing the review feedback from the peer reviewers. For us that was pretty straightforward; the feedback we received was very positive and made suggestions on how to improve things. As we wrote about a cloud product that is developing and improving quickly, we needed to double check the screens hadn’t changed. We did see one challenge in the reviewing. We wrote the preface to help provide context to the book, but it didn’t get sent out before the first chapters went to review, so some comments perhaps weren’t so in tune with the book’s underpinning goals. Should the reviewers have had the preface first? Debatable. We took advantage of this lesson to reduce the dependency on having read the preface.
Most changes were about fixing formatting, then adding a couple of additional screenshots and some clarifying text. Each chapter probably only needed one additional paragraph, so working through this was pretty quick. Then it was over to the publisher to finish things off and assemble the book.
Going forward, we will continue to write additional material, initially for the blog (oracle-integration.cloud) but we are exploring the idea of a living book where the book version will undergo quarterly updates. But time will tell as to whether this makes a difference.
The book can be found at:
My presentation for UKOUG Tech 16 can be seen by following the link – Introduction to SOA CS – or see below. It was a tremendous 4 days (if you include the Tech stream’s Super Sunday). If you are a UKOUG member and didn’t make it to the conference, I’d look out for the material to become available.
Whilst I’m not a big Apex fan (stitching business logic into the persistence layer feels wrong to a middleware person), I did attend the keynote session, which covered Apex’s history and future direction. There are some very exciting things coming, and if everything materialises as I understand it, there will be some big steps towards getting developers engaged with Oracle cloud offerings.
Oracle has done a lot of work on the middleware layer with apps container (using common Docker configurations without needing to worry about Docker), Kafka, Node.js and others to engage developers and provide the means to offer a polyglot microservices platform that is not just attractive to the traditional Oracle customer base, but also to those wanting the middle ground of supported open source. What Oracle is missing is the means to get developers trying the technology and being creative with it. Amazon and Red Hat have understood this, offering limited free footprints for a long time. Oracle offers 30-day trials, which is fine for a project-sponsored PoC, but to hook grass-roots users you need a lengthy period where people can build some cool/geeky solutions in their spare time.
Now this may be down to the fact that Oracle cloud is built on their Exa machines with clever on-silicon security features, and Oracle can’t manufacture them quickly enough, whereas other cloud providers work with largely commodity components. But if they want to challenge Amazon, as Ellison says, they need to change this.
The background to this post, and the OTN Appreciation Day can be seen at Oracle-Base.
Oracle Messaging Cloud Service (OMCS) is, I think, an overlooked gem of Oracle’s iPaaS portfolio. I say this as it offers a JMS 1.1 compliant Java library, but at the same time provides a means through which integration can be performed via REST APIs. This means it is possible to pretty transparently connect legacy JMS-based integrations with new REST-based products. The magic sauce (and therefore my favourite feature) is the concept of the Push Listener. Through the REST API it is possible to register a REST URL as a target for queues and topics to have messages sent to. Once registered, when a message appears on the queue or topic it will get passed on as a REST call. Whilst it is possible to do this with a little bit of Java code, the Push Listener simplifies the job to a REST call with a bit of XML configuration.
There is one small challenge that stops the integration being completely transparent to the recipient of the Push Listener today: it currently demands that an authentication process takes place on initial contact. This is not a complicated or challenging thing to address, but it does require a tiny bit of code.
We have taken our book on ICS into Packt Publishing’s Alpha programme, so if you order the book now you can see the chapters as soon as they have received editorial approval, and the complete final book will be made available to you as soon as we have addressed all the feedback and made any final improvements identified once the book’s draft is complete.
The book can be found on the Packt website – here
Details about the Authors from Packt can be seen at:
Those who have been using the Application Integration Architecture (AIA) on top of Oracle SOA Suite will probably know that Oracle has sunset AIA as of 12c. For 12.1 there are Core Extensions to help transition onto the 12c platform, but 12.2 leaves these behind.
One of the more valuable parts of AIA for many has been the prebuilt but extensible canonical data model, which is then used by the Prebuilt Integration Packs (PIPs). Having a ready-built canonical form can save an enormous amount of effort (consider the amount of effort invested by OASIS and other standards bodies to define standardised data definitions).
So with AIA not moving forward and the canonical form (i.e. the XML Schema) no longer being maintained, the question begs: how to move forward? Well, given that the model is represented by XML schema, you could harvest the schemas from an 11g environment, package them up and deploy them in a standalone manner in a 12c environment. Whilst this will work, it does mean that the data model won’t have any future evolution other than by home-grown effort.
Depending on your commitment to the AIA model, there is another option: adopt another prebuilt form. I know from talking with several other Oracle AIA customers that people are adopting OAGIS. This isn’t surprising, as the two have similar characteristics in the way they can be extended and the way the definitions are structured, not to mention some common ancestry. However, if you have a significant level of utilisation, moving to a new model is potentially going to have a significant level of impact.
We have also elected to go the OAGIS route (fortunately we are fairly early in our adoption, so have been able to switch quickly for all but a couple of object types). Given this, I periodically check in with the OAGIS website, and recently came across the following:
Oracle Enterprise Business Objects Contributed to OAGi
We are very pleased to announce that Oracle has contributed their Enterprise Business Objects (EBOs) and associated IP to OAGi!
The Oracle EBOs are based on OAGIS BODs from a past release and no longer supported by Oracle so they contributed them to us to harmonize with the current version of OAGIS and preserve a technology path for EBO customers.
This also gives OAGi an opportunity to further improve OAGIS content and scope.
I take this as proof of Oracle’s commitment to Open Standards and plan to say so in a press release. I personally thank Oracle for this commitment.
Scott Nieman of Land O’Lakes will be presenting his Project Definition to begin the process of harmonization on Friday, June 3, at 11 AM EDT at the Next meeting which, as members, you are all invited. Please let me know if you don’t have an invitation and I will forward it to you.
Please join me in thanking Oracle and also please try to engage in our harmonization process to improve OAGIS.
So the upshot of this is that OAGIS will gain greater coverage across its domain views. Additionally, Scott Nieman will be blazing the trail to easing the migration path. I have been fortunate enough to meet and talk with Scott at Oracle Open World, and it will be worth keeping an eye out for his findings.
Whilst working on our book about Oracle’s Integration Cloud Service, I looked around to see what options are available for FTP-based services; below is a list of those services. We can’t testify to the quality etc. of each service, but the list might be easier than a Google search and ploughing through the results, as FTP crops up a lot, even in services that don’t support the standard.
What was interesting was that none of the major document collaboration platforms offer an FTP-based view onto their platform, pushing an API instead. Whilst it is clear that FTP wouldn’t provide all the richness of the capabilities of Dropbox, Box, OneDrive etc., it is a standard so universally supported that it would mean you could have the most common use models supported from just about anywhere, without needing to install a proprietary app or write code against an API. It would be interesting to see how such capabilities could impact areas such as IoT.
Fortunately ICS includes social adapters that allow it to connect to these social platforms. But we still needed an easy FTP server to help show how to use FTP (it is still used heavily in closed ecosystems), so here is the list:
When you’re using SOA Suite to run round-the-clock services, you need to give a fair bit of thought to your deployment configuration so it becomes possible to perform rolling patches and other maintenance tasks, not only on SOA itself but all the way down to the hardware – and at the low levels you have no control over the maintenance process. Although it is very easy to think that the moment you’re using PaaS these problems are taken care of for you, life isn’t as simple as that.
Oracle cloud services typically go through a patching process once a month, usually within a defined 8-hour period on a Friday night. During this period you may lose the use of your servers as the maintenance is performed within a particular availability zone. In an ideal world this would be a rolling process, so you don’t lose everything at once. If the maintenance window is used to deploy SOA Suite patches then, although you will be told of the maintenance window, you won’t actually have an outage; instead, after the maintenance window your cloud dashboard will offer the option to apply the patches at a time that best suits you. Not only that, the patch application process is smart enough to apply them in a rolling manner, as the WebLogic nodes in the cluster have information on each other which the patch mechanism can utilise.
So where is the problem? It is very easy to forget that the PaaS platform is virtual; this means the virtualisation platform, being software, will inevitably need patching itself, whether for bug fixing, addressing security requirements or adding new capabilities. These kinds of changes today will trigger a service shutdown. Let’s be honest, trying to balance a rolling change against maximising PaaS client density is a monumentally complex problem, so simplicity and speed of roll-out suggest a small outage is easier. So how do I assure I can maintain a quality of service if I accept this as a necessity?
Well, the answer is pretty much the same as for an on-premises reference architecture: have SOA, with its supporting databases, running in a second availability zone that has a different patch time. This is going to push up the cost, as you’ll need a database with Data Guard. Assuming an active-passive model across your centres, as you approach the maintenance window you get your load balancer to route workload to the second location and let the existing workload run dry on the servers due to go through the maintenance process. Then, after the maintenance window, you reverse the process.
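The drain-and-swap timing described above boils down to a simple routing decision. Here is a sketch, assuming a one-hour drain lead time and illustrative zone names; how you would actually express this depends entirely on your load balancer.

```python
# Sketch of the failover timing: start routing new work to the secondary
# zone ahead of the maintenance window so existing work can run dry, and
# fail back once the window closes. Zone names and the one-hour drain
# lead time are illustrative assumptions.
from datetime import datetime, timedelta


def active_zone(now: datetime, window_start: datetime,
                window_hours: int = 8, primary: str = "zone-a",
                secondary: str = "zone-b", drain_hours: int = 1) -> str:
    """Return the zone the load balancer should route new work to."""
    drain_start = window_start - timedelta(hours=drain_hours)
    window_end = window_start + timedelta(hours=window_hours)
    if drain_start <= now < window_end:
        return secondary  # maintenance imminent or in progress
    return primary


if __name__ == "__main__":
    start = datetime(2017, 3, 3, 22, 0)  # a Friday-night window
    print(active_zone(datetime(2017, 3, 3, 12, 0), start))  # before drain
    print(active_zone(datetime(2017, 3, 3, 23, 0), start))  # mid-window
```

The real work, of course, is in making the second zone genuinely ready to take that load, which is where the cost issues below come in.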
The current gotcha with this is that you pay for SOA by the month, so in effect you have to run two clusters, although hourly and daily models are coming. With the hourly model you can have the second availability zone ready for use by keeping the DB alive there, but only start up the SOA instances on the hourly rate when you know the maintenance window is going to occur and it is clear there will be an infrastructure impact.
The other sticking point is that, as the period allocated is presently up to eight hours, your second centre needs to be running in a timezone with at least 8 hours difference (allowing time to fail back). This would mean that if you are using the Amsterdam or Slough locations, your second location is going to be the US West coast, Asia Pacific or, once live later this year, Japan. All of which will present serious issues regarding personal data.
I have been told that some significant customers have accepted the situation on the basis that the downtime in reality isn’t frequent and correlates to low business periods. But I suspect competition and customer demand will force this to change.