
Building end-to-end data lineage for one-time and complex queries using Amazon Athena, Amazon Redshift, Amazon Neptune and dbt


One-time and complex queries are two common scenarios in enterprise data analytics. One-time queries are flexible and suited to immediate analysis and exploratory research. Complex queries, on the other hand, refer to large-scale data processing and in-depth analysis based on petabyte-level data warehouses in big data scenarios. These complex queries typically involve data sources from multiple business systems, requiring multilevel nested SQL or joins across numerous tables for highly sophisticated analytical tasks.

However, combining the data lineage of these two query types presents several challenges:

  1. Diversity of data sources
  2. Varying query complexity
  3. Inconsistent granularity in lineage tracking
  4. Different real-time requirements
  5. Difficulties in cross-system integration

Moreover, maintaining the accuracy and completeness of lineage information while ensuring system performance and scalability are crucial considerations. Addressing these challenges requires a carefully designed architecture and advanced technical solutions.

Amazon Athena offers serverless, flexible SQL analytics for one-time queries, enabling direct querying of Amazon Simple Storage Service (Amazon S3) data for rapid, cost-effective ad hoc analysis. Amazon Redshift, optimized for complex queries, provides high-performance columnar storage and a massively parallel processing (MPP) architecture, supporting large-scale data processing and advanced SQL capabilities. Amazon Neptune, as a graph database, is ideal for data lineage analysis, offering efficient relationship traversal and complex graph algorithms to handle large-scale, intricate data lineage relationships. The combination of these three services provides a powerful, comprehensive solution for end-to-end data lineage analysis.

In the context of comprehensive data governance, Amazon DataZone provides organization-wide data lineage visualization using Amazon Web Services (AWS) services, while dbt provides project-level lineage through model analysis and supports cross-project integration between data lakes and warehouses.

In this post, we use dbt for data modeling on both Amazon Athena and Amazon Redshift. dbt on Athena supports real-time queries, while dbt on Amazon Redshift handles complex queries, unifying the development language and significantly reducing the learning curve. Using a single dbt modeling language not only simplifies the development process but also automatically generates consistent data lineage information. This approach offers robust adaptability, easily accommodating changes in data structures.

By integrating the Amazon Neptune graph database to store and analyze complex lineage relationships, combined with AWS Step Functions and AWS Lambda functions, we achieve a fully automated data lineage generation process. This combination promotes consistency and completeness of lineage data while enhancing the efficiency and scalability of the entire process. The result is a powerful and flexible solution for end-to-end data lineage analysis.

Architecture overview

The experiment’s context involves a customer already using Amazon Athena for one-time queries. To better accommodate big data processing and complex query scenarios, they aim to adopt a unified data modeling language across different data platforms. This led to the implementation of both dbt on Athena and dbt on Amazon Redshift architectures.

An AWS Glue crawler crawls data lake information from Amazon S3, generating a Data Catalog to support dbt on Amazon Athena data modeling. For complex query scenarios, AWS Glue performs extract, transform, and load (ETL) processing, loading data into the petabyte-scale data warehouse, Amazon Redshift. Here, data modeling uses dbt on Amazon Redshift.

The original lineage data files from both parts are loaded into an S3 bucket, providing the data foundation for end-to-end data lineage analysis.
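The raw lineage files used later in this post are the manifest.json artifacts that dbt writes to its target/ directory when a project is built. As a rough sketch of how they could be produced and uploaded (the project directory names are hypothetical; the bucket and object keys reuse the example values from the walkthrough), you might run:

# Build the dbt on Athena project and generate its artifacts (dbt writes target/manifest.json)
cd imdb_athena_project        # hypothetical project directory
dbt run && dbt docs generate
aws s3 cp target/manifest.json s3://data-lineage-analysis-24-09-22/athena_manifest.json

# Repeat for the dbt on Amazon Redshift project
cd ../imdb_redshift_project   # hypothetical project directory
dbt run && dbt docs generate
aws s3 cp target/manifest.json s3://data-lineage-analysis-24-09-22/redshift_manifest.json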

The following image is the architecture diagram for the solution.

Figure 1-Architecture diagram of DBT modeling based on Athena and Redshift

Some important considerations:

This experiment uses the following data dictionary:

Source table    Tool    Target table
imdb.name_basics DBT/Athena stg_imdb__name_basics
imdb.title_akas DBT/Athena stg_imdb__title_akas
imdb.title_basics DBT/Athena stg_imdb__title_basics
imdb.title_crew DBT/Athena stg_imdb__title_crews
imdb.title_episode DBT/Athena stg_imdb__title_episodes
imdb.title_principals DBT/Athena stg_imdb__title_principals
imdb.title_ratings DBT/Athena stg_imdb__title_ratings
stg_imdb__name_basics DBT/Redshift new_stg_imdb__name_basics
stg_imdb__title_akas DBT/Redshift new_stg_imdb__title_akas
stg_imdb__title_basics DBT/Redshift new_stg_imdb__title_basics
stg_imdb__title_crews DBT/Redshift new_stg_imdb__title_crews
stg_imdb__title_episodes DBT/Redshift new_stg_imdb__title_episodes
stg_imdb__title_principals DBT/Redshift new_stg_imdb__title_principals
stg_imdb__title_ratings DBT/Redshift new_stg_imdb__title_ratings
new_stg_imdb__name_basics DBT/Redshift int_primary_profession_flattened_from_name_basics
new_stg_imdb__name_basics DBT/Redshift int_known_for_titles_flattened_from_name_basics
new_stg_imdb__name_basics DBT/Redshift names
new_stg_imdb__title_akas DBT/Redshift titles
new_stg_imdb__title_basics DBT/Redshift int_genres_flattened_from_title_basics
new_stg_imdb__title_basics DBT/Redshift titles
new_stg_imdb__title_crews DBT/Redshift int_directors_flattened_from_title_crews
new_stg_imdb__title_crews DBT/Redshift int_writers_flattened_from_title_crews
new_stg_imdb__title_episodes DBT/Redshift titles
new_stg_imdb__title_principals DBT/Redshift titles
new_stg_imdb__title_ratings DBT/Redshift titles
int_known_for_titles_flattened_from_name_basics DBT/Redshift titles
int_primary_profession_flattened_from_name_basics DBT/Redshift
int_directors_flattened_from_title_crews DBT/Redshift names
int_genres_flattened_from_title_basics DBT/Redshift genre_titles
int_writers_flattened_from_title_crews DBT/Redshift names
genre_titles DBT/Redshift
names DBT/Redshift
titles DBT/Redshift

The lineage data generated by dbt on Athena consists of partial lineage diagrams, as exemplified in the following images. The first image shows the lineage of name_basics in dbt on Athena. The second image shows the lineage of title_crew in dbt on Athena.

Figure 3-Lineage of name_basics in DBT on Athena

Figure 4-Lineage of title_crew in DBT on Athena

The lineage data generated by dbt on Amazon Redshift consists of partial lineage diagrams, as illustrated in the following image.

Figure 5-Lineage of name_basics and title_crew in DBT on Redshift

Referring to the data dictionary and screenshots, it’s evident that the complete data lineage information is highly dispersed, spread across 29 lineage diagrams. Understanding the end-to-end comprehensive view requires significant time. In real-world environments, the situation is often more complex, with full data lineage potentially distributed across hundreds of files. Consequently, integrating a complete end-to-end data lineage diagram becomes crucial and challenging.

This experiment provides a detailed introduction to processing and merging the data lineage files stored in Amazon S3, as illustrated in the following diagram.

Figure 6-Merging data lineage from Athena and Redshift into Neptune

Prerequisites

To perform the solution, you must have the following prerequisites in place (a CLI sketch for attaching example permissions follows this list):

  • The Lambda function for preprocessing lineage files must have permissions to access Amazon S3 and Amazon Redshift.
  • The Lambda function for constructing the directed acyclic graph (DAG) must have permissions to access Amazon S3 and Amazon Neptune.
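For a quick experiment, one way to grant these permissions is to attach AWS managed policies to each function's execution role with the AWS CLI. This is only a sketch: the role names are hypothetical, and the broad managed policies should be replaced with narrowly scoped ones in production.

# Preprocessing Lambda: Amazon S3 and Amazon Redshift access (role name is hypothetical)
aws iam attach-role-policy --role-name athena-data-lineage-process-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-role-policy --role-name athena-data-lineage-process-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonRedshiftDataFullAccess

# DAG construction Lambda: Amazon S3 and Amazon Neptune access (role name is hypothetical)
aws iam attach-role-policy --role-name data-lineage-analysis-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-role-policy --role-name data-lineage-analysis-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/NeptuneFullAccess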

Solution walkthrough

To perform the solution, follow the steps in the next sections.

Preprocess raw lineage data for DAG generation using Lambda functions

Use Lambda to preprocess the raw lineage data generated by dbt, converting it into key-value pair JSON files that are easily understood by Neptune: athena_dbt_lineage_map.json and redshift_dbt_lineage_map.json.
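Concretely, the preprocessing step renames the nodes in the manifest's child_map by stripping their dbt resource prefixes, keeping only the parent-to-children relationships. As a minimal illustration (the node identifiers are assumptions modeled on the data dictionary above, not actual file contents), a child_map entry such as "source.imdb_dbt.imdb.name_basics": ["model.imdb_dbt.stg_imdb__name_basics"] becomes {"lineage_map": {"name_basics": ["stg_imdb__name_basics"]}} in athena_dbt_lineage_map.json.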

  1. To create a new Lambda function in the Lambda console, enter a Function name, select the Runtime (Python in this example), configure the Architecture and Execution role, and then choose Create function.

Figure 7-Basic configuration of athena-data-lineage-process Lambda

  2. Open the created Lambda function. On the Configuration tab, in the navigation pane, select Environment variables and choose your configurations. Using dbt on Athena processing as an example, configure the environment variables as follows (the process for dbt on Amazon Redshift is similar):
    • INPUT_BUCKET: data-lineage-analysis-24-09-22 (replace with the S3 bucket storing the original dbt on Athena lineage files)
    • INPUT_KEY: athena_manifest.json (the original dbt on Athena lineage file)
    • OUTPUT_BUCKET: data-lineage-analysis-24-09-22 (replace with the S3 bucket for storing the preprocessed output of the dbt on Athena lineage files)
    • OUTPUT_KEY: athena_dbt_lineage_map.json (the output file produced by preprocessing the original dbt on Athena lineage file)

Figure 8-Environment variable configuration for athena-data-lineage-process-Lambda
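If you prefer the AWS CLI to the console, the same environment variables can be set in a single call. The function name below is assumed to match the one created in the previous step; adjust the values to your own resources.

aws lambda update-function-configuration \
  --function-name athena-data-lineage-process-Lambda \
  --environment "Variables={INPUT_BUCKET=data-lineage-analysis-24-09-22,INPUT_KEY=athena_manifest.json,OUTPUT_BUCKET=data-lineage-analysis-24-09-22,OUTPUT_KEY=athena_dbt_lineage_map.json}"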

  3. On the Code tab, in the lambda_function.py file, enter the preprocessing code for the raw lineage data. Here’s a code reference using dbt on Athena processing as an example (the process for dbt on Amazon Redshift is similar). The preprocessing code for the original dbt on Athena lineage file is as follows:

The athena_manifest.json, redshift_manifest.json, and other files used in this experiment can be obtained from the Data Lineage Graph Construction GitHub repository.

import json
import boto3
import os

def lambda_handler(event, context):
    # Set up S3 client
    s3 = boto3.client('s3')

    # Get input and output paths from environment variables
    input_bucket = os.environ['INPUT_BUCKET']
    input_key = os.environ['INPUT_KEY']
    output_bucket = os.environ['OUTPUT_BUCKET']
    output_key = os.environ['OUTPUT_KEY']

    # Define helper function to strip dbt resource prefixes from node names
    def dbt_nodename_format(node_name):
        return node_name.split(".")[-1]

    # Read input JSON file from S3
    response = s3.get_object(Bucket=input_bucket, Key=input_key)
    file_content = response['Body'].read().decode('utf-8')
    data = json.loads(file_content)
    lineage_map = data["child_map"]
    node_dict = {}
    dbt_lineage_map = {}

    # Process data
    for item in lineage_map:
        lineage_map[item] = [dbt_nodename_format(child) for child in lineage_map[item]]
        node_dict[item] = dbt_nodename_format(item)

    # Update key names
    lineage_map = {node_dict[old]: value for old, value in lineage_map.items()}
    dbt_lineage_map["lineage_map"] = lineage_map

    # Convert result to JSON string
    result_json = json.dumps(dbt_lineage_map)

    # Write JSON string to S3
    s3.put_object(Body=result_json, Bucket=output_bucket, Key=output_key)
    print(f"Data written to s3://{output_bucket}/{output_key}")

    return {
        'statusCode': 200,
        'body': json.dumps('Athena data lineage processing completed successfully')
    }
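After deploying the code, you can smoke-test the function before wiring it into a workflow. A possible check with the AWS CLI, reusing the example bucket and key names above:

# Invoke the preprocessing function and inspect its response
aws lambda invoke --function-name athena-data-lineage-process-Lambda response.json && cat response.json

# Confirm the preprocessed lineage map landed in S3
aws s3 ls s3://data-lineage-analysis-24-09-22/athena_dbt_lineage_map.json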

Merge preprocessed lineage data and write to Neptune using Lambda functions

  1. Before processing data with the Lambda function, create a Lambda layer by uploading the required Gremlin plugin. For detailed steps on creating and configuring Lambda layers, see the AWS Lambda layers documentation.

Because connecting Lambda to Neptune to construct a DAG requires the Gremlin plugin, it needs to be uploaded before using Lambda. The Gremlin package can be obtained from the Data Lineage Graph Construction GitHub repository.
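If you prefer to build the layer yourself rather than download the prepackaged one, a common approach, assuming the plugin corresponds to the gremlinpython client package, is to install it into the python/ folder layout that Lambda layers expect and publish it (the layer name and runtime version are illustrative):

# Package the Gremlin client library in the directory layout Lambda layers expect
mkdir -p gremlin-layer/python
pip install gremlinpython -t gremlin-layer/python
(cd gremlin-layer && zip -r ../gremlin-layer.zip python)

# Publish the layer so it can be attached to the Lambda function
aws lambda publish-layer-version --layer-name gremlin-python \
  --zip-file fileb://gremlin-layer.zip --compatible-runtimes python3.12

The requests library layer mentioned later in this section can be packaged the same way.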

Figure 9-Lambda layers

  2. Create a new Lambda function. Choose the function to configure. To add the recently created layer, at the bottom of the page, choose Add a layer.

Figure 10-Add a layer

Create another Lambda layer for the requests library, similar to how you created the layer for the Gremlin plugin. This library will be used for HTTP client functionality in the Lambda function.

  3. Choose the recently created Lambda function to configure. Connect to Neptune through Lambda to merge the two datasets and construct a DAG. On the Code tab, the reference code to execute is as follows:
import json
import boto3
import os
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.credentials import get_credentials
from botocore.session import Session
from concurrent.futures import ThreadPoolExecutor, as_completed

def read_s3_file(s3_client, bucket, key):
    try:
        response = s3_client.get_object(Bucket=bucket, Key=key)
        data = json.loads(response['Body'].read().decode('utf-8'))
        return data.get("lineage_map", {})
    except Exception as e:
        print(f"Error reading S3 file {bucket}/{key}: {str(e)}")
        raise

def merge_data(athena_data, redshift_data):
    return {**athena_data, **redshift_data}

def sign_request(request):
    credentials = get_credentials(Session())
    auth = SigV4Auth(credentials, 'neptune-db', os.environ['AWS_REGION'])
    auth.add_auth(request)
    return dict(request.headers)

def send_request(url, headers, data):
    try:
        response = requests.post(url, headers=headers, data=data, timeout=30)
        response.raise_for_status()
        return response.text
    except requests.exceptions.RequestException as e:
        print(f"Request Error: {str(e)}")
        if hasattr(e.response, 'text'):
            print(f"Response content: {e.response.text}")
        raise

def write_to_neptune(data):
    endpoint = "https://<your-neptune-endpoint>:8182/gremlin"
    # Replace with your Neptune endpoint name

    # Clear Neptune database
    clear_query = "g.V().drop()"
    request = AWSRequest(method='POST', url=endpoint, data=json.dumps({'gremlin': clear_query}))
    signed_headers = sign_request(request)
    response = send_request(endpoint, signed_headers, json.dumps({'gremlin': clear_query}))
    print(f"Clear database response: {response}")

    # Verify that the database is empty
    verify_query = "g.V().count()"
    request = AWSRequest(method='POST', url=endpoint, data=json.dumps({'gremlin': verify_query}))
    signed_headers = sign_request(request)
    response = send_request(endpoint, signed_headers, json.dumps({'gremlin': verify_query}))
    print(f"Vertex count after clearing: {response}")

    def process_node(node, children):
        # Add node
        query = f"g.V().has('lineage_node', 'node_name', '{node}').fold().coalesce(unfold(), addV('lineage_node').property('node_name', '{node}'))"
        request = AWSRequest(method='POST', url=endpoint, data=json.dumps({'gremlin': query}))
        signed_headers = sign_request(request)
        response = send_request(endpoint, signed_headers, json.dumps({'gremlin': query}))
        print(f"Add node response for {node}: {response}")

        for child_node in children:
            # Add child node
            query = f"g.V().has('lineage_node', 'node_name', '{child_node}').fold().coalesce(unfold(), addV('lineage_node').property('node_name', '{child_node}'))"
            request = AWSRequest(method='POST', url=endpoint, data=json.dumps({'gremlin': query}))
            signed_headers = sign_request(request)
            response = send_request(endpoint, signed_headers, json.dumps({'gremlin': query}))
            print(f"Add child node response for {child_node}: {response}")

            # Add edge
            query = f"g.V().has('lineage_node', 'node_name', '{node}').as('a').V().has('lineage_node', 'node_name', '{child_node}').coalesce(inE('lineage_edge').where(outV().as('a')), addE('lineage_edge').from('a').property('edge_name', ' '))"
            request = AWSRequest(method='POST', url=endpoint, data=json.dumps({'gremlin': query}))
            signed_headers = sign_request(request)
            response = send_request(endpoint, signed_headers, json.dumps({'gremlin': query}))
            print(f"Add edge response for {node} -> {child_node}: {response}")

    with ThreadPoolExecutor(max_workers=10) as executor:
        futures = [executor.submit(process_node, node, children) for node, children in data.items()]
        for future in as_completed(futures):
            try:
                future.result()
            except Exception as e:
                print(f"Error in processing node: {str(e)}")

def lambda_handler(event, context):
    # Initialize S3 client
    s3_client = boto3.client('s3')

    # S3 bucket and file paths
    bucket_name = "data-lineage-analysis"  # Replace with your S3 bucket name
    athena_key = 'athena_dbt_lineage_map.json'  # Replace with your Athena lineage key-value output JSON name
    redshift_key = 'redshift_dbt_lineage_map.json'  # Replace with your Redshift lineage key-value output JSON name

    try:
        # Read Athena lineage data
        athena_data = read_s3_file(s3_client, bucket_name, athena_key)
        print(f"Athena data size: {len(athena_data)}")

        # Read Redshift lineage data
        redshift_data = read_s3_file(s3_client, bucket_name, redshift_key)
        print(f"Redshift data size: {len(redshift_data)}")

        # Merge data
        combined_data = merge_data(athena_data, redshift_data)
        print(f"Combined data size: {len(combined_data)}")

        # Write to Neptune (including clearing the database)
        write_to_neptune(combined_data)

        return {
            'statusCode': 200,
            'body': json.dumps('Data successfully written to Neptune')
        }
    except Exception as e:
        print(f"Error in lambda_handler: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps(f'Error: {str(e)}')
        }
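Two operational details are worth noting. First, the fold()/coalesce() pattern in the Gremlin queries makes the vertex and edge writes idempotent, so rerunning the function does not create duplicate nodes or edges. Second, Neptune is reachable only from within its VPC, so this function must run in that VPC; if it was not created there, the network settings can be attached afterwards with the AWS CLI (the subnet and security group IDs below are placeholders):

aws lambda update-function-configuration \
  --function-name data-lineage-analysis-lambda \
  --vpc-config SubnetIds=subnet-xxxxxxxx,SecurityGroupIds=sg-xxxxxxxx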

Create Step Functions workflow

  1. On the Step Functions console, choose State machines, and then choose Create state machine. On the Choose a template page, select Blank template.

Figure 11-Step Functions blank template

  2. In the Blank template, choose Code to define your state machine. Use the following example code:
{
  "Comment": "Daily Data Lineage Processing Workflow",
  "StartAt": "Parallel Processing",
  "States": {
    "Parallel Processing": {
      "Type": "Parallel",
      "Branches": [
        {
          "StartAt": "Process Athena Data",
          "States": {
            "Process Athena Data": {
              "Type": "Task",
              "Resource": "arn:aws:states:::lambda:invoke",
              "Parameters": {
                "FunctionName": "athena-data-lineange-process-Lambda", ##Replace with your Athena data lineage process Lambda function name
                "Payload": {
                  "input.$": "$"
                }
              },
              "End": true
            }
          }
        },
        {
          "StartAt": "Process Redshift Data",
          "States": {
            "Process Redshift Data": {
              "Type": "Task",
              "Resource": "arn:aws:states:::lambda:invoke",
              "Parameters": {
                "FunctionName": "redshift-data-lineange-process-Lambda", ##Replace with your Redshift data lineage process Lambda function name
                "Payload": {
                  "input.$": "$"
                }
              },
              "End": true
            }
          }
        }
      ],
      "Subsequent": "Load Information to Neptune"
    },
    "Load Information to Neptune": {
      "Kind": "Process",
      "Useful resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "data-lineage-analysis-lambda" ##Substitute along with your Lambda perform Identify
      },
      "Finish": true
    }
  }
}
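As an alternative to the console, you can save the definition to a file (after removing the inline ## placeholders, which are not valid JSON) and create the state machine with the AWS CLI. The state machine name and execution role ARN below are placeholders; the role needs permission to invoke the three Lambda functions.

aws stepfunctions create-state-machine \
  --name daily-data-lineage-processing \
  --definition file://data_lineage_workflow.json \
  --role-arn arn:aws:iam::123456789012:role/StepFunctionsDataLineageRole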

  3. After completing the configuration, choose the Design tab to view the workflow shown in the following diagram.

Figure 12-Step Functions design view

Create scheduling rules with Amazon EventBridge

Configure Amazon EventBridge to generate lineage data daily during off-peak business hours (a CLI sketch follows these steps). To do this:

  1. Create a new rule in the EventBridge console with a descriptive name.
  2. Set the rule type to Schedule and configure it to run once daily (using either a fixed rate or the cron expression "0 0 * * ? *").
  3. Select the AWS Step Functions state machine as the target and specify the state machine you created earlier.
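The equivalent AWS CLI calls are sketched below. The rule name, state machine ARN, and the IAM role that lets EventBridge start executions are placeholders.

# Create a rule that fires once a day at midnight UTC
aws events put-rule --name daily-data-lineage-run \
  --schedule-expression "cron(0 0 * * ? *)"

# Point the rule at the Step Functions state machine
aws events put-targets --rule daily-data-lineage-run \
  --targets 'Id=1,Arn=arn:aws:states:us-east-1:123456789012:stateMachine:daily-data-lineage-processing,RoleArn=arn:aws:iam::123456789012:role/EventBridgeStepFunctionsRole'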

Query results in Neptune

  1. On the Neptune console, select Notebooks. Open an existing notebook or create a new one.

Figure 13-Neptune notebook

  2. In the notebook, create a new code cell to perform a query. The following code example shows the query statement and its results:
%%gremlin -d node_name -de edge_name
g.V().hasLabel('lineage_node').outE('lineage_edge').inV().hasLabel('lineage_node').path().by(elementMap())
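The -d and -de options tell the notebook's graph visualization which property to display for vertices and edges (node_name and edge_name here), so the rendered graph is labeled with readable table and model names.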

You can now see the end-to-end data lineage graph information for both dbt on Athena and dbt on Amazon Redshift. The following image shows the merged DAG data lineage graph in Neptune.

Figure 14-Merged DAG data lineage graph in Neptune

You can query the generated data lineage graph for data related to a specific table, such as title_crew.

The sample query statement and its results are shown in the following code example:

%%gremlin -d node_name -de edge_name
g.V().has('lineage_node', 'node_name', 'title_crew')
  .repeat(
    union(
      __.inE('lineage_edge').outV(),
      __.outE('lineage_edge').inV()
    )
  )
  .until(
    __.has('node_name', within('names', 'genre_titles', 'titles'))
    .or()
    .loops().is(gt(10))
  )
  .path()
  .by(elementMap())

The following image shows the filtered results based on the title_crew table in Neptune.

Figure 15-Filtered results based on title_crew table in Neptune

Clean up

To clean up your resources, complete the following steps:

  1. Delete EventBridge rules:
# Stop new events from triggering while removing dependencies
aws events disable-rule --name <rule-name>
# Break connections between the rule and its targets (like Lambda functions)
aws events remove-targets --rule <rule-name> --ids <target-id>
# Remove the rule completely from EventBridge
aws events delete-rule --name <rule-name>

  2. Delete the Step Functions state machine:
# Stop all running executions
aws stepfunctions stop-execution --execution-arn <execution-arn>
# Delete the state machine
aws stepfunctions delete-state-machine --state-machine-arn <state-machine-arn>

  3. Delete Lambda functions:
# Delete the Lambda function
aws lambda delete-function --function-name <function-name>
# Delete Lambda layers (if used)
aws lambda delete-layer-version --layer-name <layer-name> --version-number <version-number>

  4. Clean up the Neptune database:
# Delete all snapshots
aws neptune delete-db-cluster-snapshot --db-cluster-snapshot-identifier <snapshot-id>
# Delete the database instance
aws neptune delete-db-instance --db-instance-identifier <instance-id> --skip-final-snapshot
# Delete the database cluster
aws neptune delete-db-cluster --db-cluster-identifier <cluster-id> --skip-final-snapshot

  5. Follow the instructions at Deleting a single object to clean up the S3 buckets.

Conclusion

In this post, we demonstrated how dbt enables unified data modeling across Amazon Athena and Amazon Redshift, integrating data lineage from both one-time and complex queries. By using Amazon Neptune, this solution provides comprehensive end-to-end lineage analysis. The architecture uses AWS serverless computing and managed services, including Step Functions, Lambda, and EventBridge, providing a highly flexible and scalable design.

This approach significantly lowers the learning curve through a unified data modeling method while enhancing development efficiency. The end-to-end data lineage graph visualization and analysis not only strengthen data governance capabilities but also offer deep insights for decision-making.

The solution’s flexible and scalable architecture effectively optimizes operational costs and improves business responsiveness. This comprehensive approach balances technical innovation, data governance, operational efficiency, and cost-effectiveness, supporting long-term business growth with the adaptability to meet evolving business needs.

With OpenLineage-compatible data lineage now generally available in Amazon DataZone, we plan to explore integration possibilities to further enhance the system’s capability to handle complex data lineage analysis scenarios.

If you have any questions, please feel free to leave a comment in the comments section.


About the authors


Nancy Wu is a Solutions Architect at AWS, responsible for cloud computing architecture consulting and design for multinational enterprise customers. She has many years of experience in big data, enterprise digital transformation research and development, consulting, and project management across the telecommunications, entertainment, and financial industries.

Xu Feng is a Senior Industry Solution Architect at AWS, responsible for designing, building, and promoting industry solutions for the Media & Entertainment and Advertising sectors, such as intelligent customer service and business intelligence. With 20 years of software industry experience, he is currently focused on researching and implementing generative AI and AI-powered data solutions.

Xu Da is an Amazon Web Services (AWS) Partner Solutions Architect based out of Shanghai, China. He has more than 25 years of experience in the IT industry, software development, and solution architecture. He is passionate about collaborative learning, knowledge sharing, and guiding the community on their cloud technologies journey.
