When building Node solutions, even if you’re not going to publish the code to a public repository, you’re likely to be using package.json to declare your app’s dependencies; doing so makes a utility easier to build and deploy. But if you’re conversant with several languages, there is a tendency to simply adapt your existing skills to the new one, and the downside is that small tooling nuances can catch you off guard and consume time while you figure them out. How NPM handles package licensing (as shown below) is one such case.
If you create package.json using npm init, it is fairly common to accept the default values; in the case of the license, that default is the ISC license. This is easily forgotten. The problem here is twofold:
Does the license set reflect the constraints of the dependencies and their licenses?
Does the default license reflect the position you want?
Looking at the latter point first: this matters because organizations have matured (and tooling has greatly improved) when it comes to understanding how open source licensing can impact them. It is particularly important for any organization leveraging open source as part of its revenue-generating activities, whether ‘as a service’ or by selling software solutions. If you put the wrong license here, the license-checking tools that often protect code repositories may reject your code, even in internal-only use cases (yes, this tripped me up).
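For illustration, the fragment of package.json in question is just the license field; a minimal sketch, with placeholder name and version:

{
  "name": "my-utility",
  "version": "1.0.0",
  "license": "ISC"
}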
To help overcome this issue, you can install a tool that will analyze the dependencies (and optionally their dependencies) and report back on your license exposure. This tool is called license-report. Once installed (npm install -g license-report), we just need to point the tool at the package.json file, e.g. license-report package.json. We can make the results a lot more consumable by outputting the content in a number of formats, for example as a simple text table:
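(The flags below are from my reading of the license-report documentation, so treat them as an assumption to verify.)

license-report --package=./package.json --output=table

Each row of the output lists a dependency along with its declared license type, making any problematic licenses easy to spot.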
From this, you could set your license declaration in package.json, or validate that your preferred license won’t conflict with those of your dependencies.
I’ve designed a variety of GraphQL schemas and developed microservice backends, but until recently I hadn’t done much with configuring the Apollo implementation of a GraphQL server. This may reflect the fact that my understanding of JavaScript doesn’t extend into the world of Node.js as much as I’d like (the problem with being a multi-language developer is that you’re likely to find your way around many languages but never be a master of one). Anyway, the following content is about the implementation of the GraphQL server part of a solution. These pointers may be mainly for my own benefit, but you might find them helpful as well.
To make it easy to reference the code, we’ve added markers (n) into the code, where n is a number. These are not part of the code; they are there to make the different lines referenceable. Where code should go but is not relevant to the point being made, I’ve added an ellipsis (…).
Dynamic loading and server configuration
import { ApolloServer } from 'apollo-server';
import { loadFilesSync } from '@graphql-tools/load-files';
import { resolvers } from './resolvers.js'; (1)
import ProviderInternalAPI from './ProviderInternalAPI.js'; (1)
import EventsInternalAPI from './EventsInternalAPI.js'; (1)
const server = new ApolloServer({
  debug: true, (2)
  typeDefs: loadFilesSync('./schema.graphql'), (3)
  resolvers,
  dataSources: () => {
    return {
      eventsInternalAPI: new EventsInternalAPI(), (4)
      providerInternalAPI: new ProviderInternalAPI() (4)
    };
  }
});
There is the potential to dynamically load the resolvers rather than importing each JavaScript file, as we see on the lines marked (1). The mechanics to do this are documented here; it would be cool if an opinionated implementation were provided. As shown by (3), we can load the schema from an independent file. The Apollo example approach for this didn’t seem to work for us, although both approaches make use of graphql-tools in a synchronous manner.
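As a sketch of what that dynamic loading could look like (this assumes the @graphql-tools/merge package and an illustrative ./resolvers folder layout, so check it against the graphql-tools documentation):

import { loadFilesSync } from '@graphql-tools/load-files';
import { mergeResolvers } from '@graphql-tools/merge';

// gather every resolver module in the folder and merge them into the
// single resolver map that the ApolloServer constructor expects
const resolverFiles = loadFilesSync('./resolvers/**/*.resolvers.js');
const resolvers = mergeResolvers(resolverFiles);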
We can switch on debugging (2) for the GraphQL server, although the level of information published doesn’t appear to be significant. Ideally this setting would be switched off for production.
Defining the resolvers
The prefix for each resolver (1) must correlate to the name of the mutation or query in the schema (not the type, as you might expect coming from Java). Often we don’t need all the parameters for the resolver. The documentation describes replacing each unused parameter with one or more underscores (i.e. _, __), the underscore denoting a field not in use. However, we can satisfy the indication of not being used but keep the meaning of each position by using an underscore followed by a name (i.e. _parent, _args), as shown in (2).
By taking the response into a variable (3) we can optionally log it. Returning directly from the invocation line would yield the handler object rather than the payload itself; by taking the result into a variable, we can log the content if desired and then return it.
The backtick syntax is a JavaScript template-literal feature (not specific to Node.js). It allows us to incorporate variables into a string by referencing them within ${} (4).
We need to supply the GraphQL server with instances of the layer of code that the resolvers will interact with. We can instantiate these instances in the declaration. The naming of each object is important (4), as it must match the references in resolvers.js.
import { useLogger } from "@graphql-yoga/node";
...
latestEvent (1): async (_parent, _args, { dataSources }, _info) (2) => {
  if (log) { console.log("resolvers - get latest event"); }
  let responseValue = await dataSources.eventsInternalAPI.getLatestEvent(); (3)
  if (log) { console.log(`Resolver response for latest event:\n ${responseValue}`); } (4)
  return responseValue;
},
To handle the use of resolvers within a larger resolver (chained resolvers), we need to declare the resolution outside of the Query and Mutation blocks, but inside the whole declaration block (1). The name provided needs to match the parent entity that the query resolver contributes to.
To then provide values from the outer resolution to the chained resolver, we use the naming as represented in the GraphQL schema, as shown by (2); the GraphQL engine will resolve the mapped values. A sketch of the pattern follows below.
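The original snippet isn’t reproduced here, so the following is a hedged sketch of the pattern; the Event type, its provider field, and the data-source method are my assumptions rather than the actual schema:

Event: { (1)
  provider (2): async (parent, _args, { dataSources }, _info) => {
    // the parent object carries the fields already resolved for the Event,
    // so its provider code can be used to fetch the related entity
    return dataSources.providerInternalAPI.getProvider(parent.provider);
  }
},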
Web resolver URL
// GET
async getProvider(code) {
  console.log("getProvider (%s) directing to %s", code, this.baseURL);
  return this.get(`provider?code=${code}`); (1)
}
The URL parameters need to be appended to the base URL path for the parent class to use in the invocation, as shown by (1). The Apollo examples showed a setter option, but we didn’t see the URI being addressed properly with it. This approach produces the required behavior.
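For context, the method above would sit in a data source class along these lines; a minimal sketch assuming the apollo-datasource-rest package, with a placeholder base URL:

import { RESTDataSource } from 'apollo-datasource-rest';

class ProviderInternalAPI extends RESTDataSource {
  constructor() {
    super();
    // placeholder endpoint; this.get() resolves paths against baseURL
    this.baseURL = 'http://localhost:3000/';
  }

  // GET - append the query parameter to the path for the parent class
  async getProvider(code) {
    console.log("getProvider (%s) directing to %s", code, this.baseURL);
    return this.get(`provider?code=${code}`);
  }
}

export default ProviderInternalAPI;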
Oracle’s product portfolio is significant, from databases (obviously) to GraalVM to a cloud platform capable of competing with GCP, AWS, and Azure. This means locating the Oracle-provided plugins, or community ones, can get messy. Depending on your perspective, Oracle developer plugins could relate to Java and GraalVM or to the Oracle Database.
As broad as the portfolio is the spread of Oracle’s details regarding the plugins. So the following represents what we’ve identified: first the Oracle-provided tooling, then the community plugins we’ve used when working on Oracle-based solutions.
Name / Plugin Search: search:Oracle Labs
Description / Additional Details: returns all the Oracle plugins related to GraalVM.

Name / Plugin Search: SuiteCloud Extension for Visual Studio Code
Description / Additional Details: part of the SuiteCloud Software Development Kit (SuiteCloud SDK), a set of tools to customize your NetSuite accounts.
GitHub Actions is a means by which events such as commits to GitHub trigger external infrastructure to perform actions such as creating application binaries.
Low-level (aka detailed) design documentation has been an interesting point of debate. At one extreme we have the Agile Manifesto, which values working software over comprehensive documentation, and people using this as an argument for not producing documentation at all. At the other end of the spectrum, as someone working for a large SI, documentation is more often than not contractually binding, and accurate docs are key when taking work over from a different supplier organization.
It is clear that documentation is an essential element, but I do agree with the Agile Manifesto: a business operates on its software, not its documents, although the docs help us keep the software maintained and running.
How do we balance the age-old conflicts of …
Documents get out of date because they are kept separate from the implementation
Documents, particularly when rushed, don’t provide the information necessary
Document templates having sections used as tickboxes rather than guide rails
Making sure we’re working with the most up-to-date document
Possibilities
One of the key issues behind documents getting out of date is the compound problem of accessibility, visibility, and ease of maintenance. Together these separate the documentation from the reality of the code and configuration.
This can be eased by bringing documentation ‘physically’ closer to the code, as we often see with readme markdown files on GitHub, for example. But we can get closer still with quality code commenting, particularly for each package and module. Just about every code or notation format has its own document generator, from the well-proven Javadoc to Terradoc for Terraform. To illustrate the point, here is an example:
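(The original examples aren’t reproduced here, so the following is a representative sketch using JSDoc; the function and its names are purely illustrative.)

/**
 * Calculates the gross price of an order, including tax.
 *
 * @param {number} net - the pre-tax order value
 * @param {number} taxRate - the tax rate as a decimal, e.g. 0.2 for 20%
 * @returns {number} the gross order value
 */
function grossPrice(net, taxRate) {
  return net * (1 + taxRate);
}

A generator can turn comments like this into navigable documentation, so the docs live in the same change set as the code they describe.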
If done well, the documentation can be generated and deliver the right information. It also means that in structured change management, the change task for the code includes the documentation. The ideas behind combining code and documentation can be seen in good API Blueprints.
When you still need to produce publishable documents, you have the opportunity to stitch the docs generated for multiple classes and packages together using tools such as pandoc. Arguably it would be the developer’s job to establish the pandoc configuration file (Documentation as Code).
You can add to this, if done carefully, by adding diagrams such as UML representations. Importantly, this generation process can produce representations that include lots of detail that would be noise in the key representation (it is time for tools like Sparx to support annotations that can give hints as to what to show in a generated model).
Pitfalls
The biggest risks of this approach are:
People paying lip service to documenting code, or using the argument that agile means no documentation (an age-old misrepresentation)
Comments not actually reflecting the code correctly
Assuming the documentation will be clear simply because it is written
These pitfalls could in theory be addressed through some smarts, such as comparing the volume of documentation generated against the number of lines of code and code-complexity metrics.
But like many things, good culture and good application of principles are essential.
Exploring further
There is a growing set of dedicated resources in this space; check out:
When it comes to development, we have had coding standards for almost as long as we have been coding. We tend to look at coding standards as a means of promoting good-quality code and reducing the likelihood of bugs, but they also help with readability, making it easy to navigate a code base. This is sufficiently important that there is a vast choice of tools to help us ensure we align with good practices.
That readability, when it comes to code interfaces, also makes an interface easier to use, as it promotes consistency and, as Don Norman would say, avoids ‘cognitive load’, in other words the effort involved in performing actions with the interface. Any Java developer will tell you: want to print out an object (any object)? You get a string representation using the .toString() method and direct it using the io packages.
That consistency and predictability are important not just for code. If you look at any API best-practices document you’ll encounter, directly or indirectly, the need for conventions that drive consistency – the use of singular or plural for the names of entities, the application of case (camel case, snake case, etc.), good naming, and so on – so that related things appear together in the documentation. Products such as Apiary and SwaggerHub include tooling to help police this in our API design work.
But what about the policies that we use to define how an API gateway handles the receipt and routing of API invocations? Well yes, we should have standards here as well. Some might say: governance gone mad. But gateways are often shared services, so making it easy to see and logically group APIs together, at the very least by using a good naming convention, will help as a minimum. If API management is being administered in a more DevOps fashion, then information security professionals will probably want assurance that developers are applying policies in the recommended manner.
As APIs become more pervasive within our solutions, we see the arrival not just of design and cataloging tools such as Apiary, Apigee, and others, but also of gateways. The gateways provide execution of operations including validation, accounting (monetization), routing, and other controls, such as throttling checks, that would often not occur until the first contact with a service bus – for example, initial routing based on the API call, or fine-grained authentication and authorization (differing from your firewalls, which will perhaps just authorize).
In the more traditional integration middleware of Oracle Service Bus and SOA Suite (whether cloud or on-premises), you can trace execution through the middleware end to end. This tracing can be achieved because the platform creates and assigns a UUID (aka the ECID) and ensures that it is carried through the middleware. It is this very behavior that allows Oracle to provide Business Activity Monitoring without needing to be invasive. Not only that: in a highly distributed environment you can track the processing of a transaction from end to end.
The challenge now is that the first point of middleware-style behavior can be at the gateway, so we actually need to bring the UUID, or an equivalent, forward to that first point of contact. Not only that, we need to account for the fact that non-Oracle integration middleware will be involved. Within Spring, Kibana, and other established frameworks and tools gaining significant traction with the rise of microservices, the IDs being used are not the same as the UUIDs used by Oracle. For example, Spring Cloud Sleuth uses the same HTTP header IDs that Zipkin and Kibana support – the B3 headers, such as X-B3-TraceId, X-B3-SpanId, and X-B3-ParentSpanId.
For the new Oracle API Platform Cloud Service, we can check for the existence of these header attributes in a policy and apply actions such as:
Apply a header with a TraceId or SpanId,
If a SpanId exists, then we may wish to nest our call as a child span by moving the SpanId to the ParentSpanId and creating a new SpanId.
Ultimately it would be more attractive to apply the logic using the API Platform’s SDK, but to get things rolling, applying the IDs with the API Platform’s Groovy policy is sufficient (more here).
The next question that begs is where to put the ID and what to call it. Put the value in the body, and you’re invading the business aspect of an API with execution-specific details, not to mention potentially changing the API definition. Meanwhile, stuffing HTTP(S) headers with custom attributes is often discouraged, as the values aren’t immediately visible. In my opinion, the answer has largely been set by precedent for us: if you were using JMS rather than HTTPS, you would use the header, and a number of frameworks already exist that make use of this strategy, such as those mentioned above.
Using the headers defined by Zipkin, Kibana, and friends means that a number of support tools will understand them out of the box and be able to help visualize your logs with no additional effort.
The following is a Groovy script that could be used for the purpose of applying an ID appropriately into the HTTP header in the API Platform:
if (!context.getApiRequest().getHeaders().containsKey("baggage-UUID"))
{
  // use current time to seed random generator
  def now = java.time.Instant.now()
  def random = new java.util.Random(now.getEpochSecond())

  // create an array of 16 bytes to hold the random value
  byte[] uid = new byte[16]
  random.nextBytes(uid)

  // encode the random bytes as a hex string
  Writable uidInHex = uid.encodeHex()
  String uidStr = uidInHex.toString()

  // set the outbound header
  context.getServiceRequest().setHeader("baggage-UUID", uidStr)
}
When configuring API policies in the Oracle API Platform, it helps if there is a simple back end that can take the received payload and record the sent values (header and body), as well as reflect the call details back as the response, or possibly respond with a test payload (so that response policies, particularly policies that require payload navigation, can be exercised correctly). With this facility it becomes a lot easier to determine whether the policies are executing correctly in terms of routing, transforming, filtering, etc., without needing to worry about whether the API implementation is correct. You could say that this is a kind of mock for testing the API Platform.
The added benefit of having a mock back end is that it becomes easy to ‘smoke test’ a gateway deployment, particularly if the mock is happy to receive any form of call.
Whilst implementing such a capability can be done in pretty much any language and platform you like (we have in the past, for example, built a Spring Boot Java application whose dependencies can be configured for deployment into WebLogic), we have come to refer to these test apps/mocks as PlatformTests, as that’s exactly what they help do. A Node.js implementation of a PlatformTest, such as the following, is particularly appealing, as the Node.js footprint is small and simple to deploy and undeploy. A basic Node.js implementation can also consume any URL and operation you choose to use, and the nature of JavaScript makes it very quick to adapt the mock if need be. Although in the ideal world, we write the solution once and then use simple configuration to tune its behavior.
The following code looks for a local file called testResponse.json; if found, it returns the content of the file (assumed to be JSON), otherwise it reflects the received headers and body back in the response body. This reflection makes it extremely easy to see how the policies have changed the inbound call. The content is also logged to the console, making it easy to see what came through to the back end.
The implementation also assumes port 8080, but changing the port is exceptionally easy.
There is one enhancement planned: allowing the response test payload to be handled as XML. This will need a little tweaking of the code, as presently the JSON object is stringified.
const http = require('http');
const fs = require('fs');
// create a simple HTTP server that will handle the requests
http.createServer((request, response) => {
  const { headers, method, url } = request;
  console.log("Called at " + new Date().toLocaleString());
  let body = [];
  request.on('error', (err) => {
    console.log("Svr Error Handler :" + err.toString());
    response.statusCode = 400;
    response.end();
  }).on('data', (chunk) => {
    body.push(chunk);
  }).on('end', () => {
    // At this point, we have the headers, method, url and body, and can now
    // do whatever we need to in order to respond to this request.
    body = Buffer.concat(body).toString();

    // record in the console what details have been received
    console.log("Received:\nMethod:" + method +
      "\n URL:" + url + "\nheaders:\n" + JSON.stringify(headers) +
      "\nBody:\n" + body);

    // now build the response
    response.setHeader('Content-Type', 'application/json');
    response.setHeader('PlatformTestTime', new Date().toLocaleString());

    // initialise our response object so that if we don't load a response
    // file then we reflect the content received
    var responseBody = { headers, method, url, body };

    // try reading a response file; any error arrives via the callback
    fs.readFile('testResponse.json', function (err, data) {
      if (err != null) {
        if (err.code === 'ENOENT') {
          console.log("no response file - will reflect");
        } else {
          console.log("Read error:" + err.toString());
        }
      } else if ((data != null) && (data.length > 0)) {
        // we have a file with content - process it into a JSON object
        const fileContent = data.toString('utf8');
        console.log("test response:" + fileContent);
        responseBody = JSON.parse(fileContent);
      }
      // turn the response object into JSON with stringify and send it
      var output = JSON.stringify(responseBody);
      response.statusCode = 200;
      response.write(output);
      console.log("Returning:" + output);
      response.end();
    });
  });
}).listen(8080); // Activates this server, listening on port 8080.
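To smoke test a deployment, any HTTP client will do. For example, with curl (the path here is arbitrary, as the mock accepts any URL and operation):

curl -X POST http://localhost:8080/anyPath -H "Content-Type: application/json" -d '{"hello":"world"}'

With no testResponse.json file present, the response is the reflection of the method, URL, headers, and body that arrived, so any changes applied by the gateway policies are immediately visible.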