
I have an RDS SQL Server instance for which I need to get all successful/failed login attempts, create a report (CSV file) out of them, and upload it to an S3 bucket.

I am using the queries below to get both successful and failed logins:

-- Fetch failed login attempts
SELECT *
FROM msdb.dbo.rds_fn_get_audit_file('D:\rdsdbdata\SQLAudit\*.sqlaudit', default, default)
WHERE action_id = 'LGIF';

-- Fetch successful login attempts
SELECT *
FROM msdb.dbo.rds_fn_get_audit_file('D:\rdsdbdata\SQLAudit\*.sqlaudit', default, default)
WHERE action_id = 'LGIS';

I need to do this through a Node.js-based Lambda function in AWS. How can I do it? I am totally new to Node.js and could not find any example.

marc_s
tavier
  • Here is a great example of how to achieve most of what you're looking to do: https://stackoverflow.com/questions/33165045/aws-lambda-and-sql-server-integration – vvvv4d Aug 17 '20 at 15:48

2 Answers


You'll need to write a Lambda that can talk to SQL Server, REF: AWS Lambda NodeJS call to SQL Server returns no data and no errors

var sql = require("mssql");

// config for your database
var config = {
    user: 'xxuser',
    password: 'xxxx',
    server: 'mydns',
    database: 'tavier'
};

module.exports.rdsquery = async event => {
  console.log('called rdsquery');
  try {
    // connect to your database (mssql's connect returns a promise,
    // so it can be awaited instead of mixing in a callback)
    await sql.connect(config);
    console.log('connected');

    // query the database and get the records; note that inside a JS
    // template literal, '\\' produces the single backslash SQL expects
    var request = new sql.Request();
    var result = await request.query(`SELECT *
        FROM msdb.dbo.rds_fn_get_audit_file('D:\\rdsdbdata\\SQLAudit\\*.sqlaudit', default, default)
        WHERE action_id = 'LGIF'`);

    // send records as a response
    console.log('logging recordset');
    console.log(result.recordset);
    return { statusCode: 200, headers: { 'Access-Control-Allow-Origin': '*' }, body: JSON.stringify(result.recordset) };
  }
  catch (e) {
    console.log('rdsquery-catch: ' + e);
    return { statusCode: 500, headers: { 'Access-Control-Allow-Origin': '*' }, body: JSON.stringify('ERR: ' + e) };
  }
};

Instead of returning the recordset, save it to S3, REF: https://stackoverflow.com/a/40188305/495455

var AWS = require('aws-sdk');

function putObjectToS3(bucket, key, data) {
    var s3 = new AWS.S3();
    var params = {
        Bucket: bucket,
        Key: key,
        Body: data
    };
    s3.putObject(params, function (err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else     console.log(data);           // successful response
    });
}
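To produce the CSV the question asks for, the recordset can be flattened to text before handing it to putObjectToS3. A minimal sketch; the bucket name and key are placeholders, and the naive joining here does not escape commas, quotes, or newlines (a library such as csv-stringify would handle that properly):

```javascript
// Convert an array of row objects into CSV text.
// Naive sketch: values containing commas, quotes or newlines are not escaped.
function toCsv(rows) {
    if (!rows || rows.length === 0) return '';
    var headers = Object.keys(rows[0]);
    var lines = rows.map(function (row) {
        return headers.map(function (h) { return row[h]; }).join(',');
    });
    return [headers.join(',')].concat(lines).join('\n');
}

// Hypothetical wiring inside the handler, after the query succeeds:
// var csv = toCsv(recordset);
// putObjectToS3('my-report-bucket', 'reports/logins.csv', csv); // placeholder bucket/key
```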
Jeremy Thompson

I assume your file is generated on your own server, not in the cloud. IMHO there is no reason to use a Lambda when the goal is just to upload a file to a location, and here is how it could be done without Lambda.

  • Lambda and your SQL Server will possibly not be in the same cloud
  • Extra effort to set up a Lambda and post data that is only moved to S3

What you need is:

  1. a script to generate the report

  2. a shell script upload-report-s3.sh

    • Use the AWS CLI to upload the generated file: aws s3 cp ./path-to-report-file.csv s3://your-bucket/extra-path/target-location.csv
  3. a cron job to integrate point 1) and 2)

Note: this assumes the S3 bucket has already been created.
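Steps 2) and 3) above might look like the sketch below. The file path, bucket, script locations, and schedule are all placeholders, and the AWS_CLI variable exists only so the script can be dry-run with AWS_CLI=echo:

```shell
#!/bin/bash
# upload-report-s3.sh - upload the generated report to S3
set -euo pipefail

upload_report() {
    local report="${1:-./path-to-report-file.csv}"                # placeholder path
    local dest="s3://your-bucket/extra-path/target-location.csv"  # placeholder bucket/key
    # AWS_CLI defaults to the real CLI; override with AWS_CLI=echo for a dry run
    "${AWS_CLI:-aws}" s3 cp "$report" "$dest"
}

# Step 3): a crontab entry (crontab -e) running the pipeline daily at 06:00,
# assuming hypothetical script locations:
# 0 6 * * * /opt/scripts/generate-report.sh && /opt/scripts/upload-report-s3.sh
```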

Traycho Ivanov